Test Report: Docker_Linux_crio_arm64 22127

087e852008767f332c662fe76eaa150bb5f9e6c8:2025-12-13:42757

Test fail (55/412)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.3
44 TestAddons/parallel/Registry 16.07
45 TestAddons/parallel/RegistryCreds 0.47
46 TestAddons/parallel/Ingress 145.7
47 TestAddons/parallel/InspektorGadget 5.27
48 TestAddons/parallel/MetricsServer 5.39
50 TestAddons/parallel/CSI 39.81
51 TestAddons/parallel/Headlamp 3.16
52 TestAddons/parallel/CloudSpanner 6.3
53 TestAddons/parallel/LocalPath 8.41
54 TestAddons/parallel/NvidiaDevicePlugin 6.31
55 TestAddons/parallel/Yakd 5.27
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.93
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.64
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.39
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.34
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.48
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.65
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.13
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.05
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 2.08
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.06
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.44
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.71
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.43
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.11
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 100.99
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.33
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.31
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.28
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.48
293 TestJSONOutput/pause/Command 2.51
299 TestJSONOutput/unpause/Command 1.92
358 TestKubernetesUpgrade 791.51
384 TestPause/serial/Pause 6.67
399 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.6
406 TestStartStop/group/old-k8s-version/serial/Pause 6.45
412 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.69
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.47
424 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.57
427 TestStartStop/group/no-preload/serial/FirstStart 514.03
430 TestStartStop/group/embed-certs/serial/Pause 7.67
432 TestStartStop/group/newest-cni/serial/FirstStart 505.73
433 TestStartStop/group/no-preload/serial/DeployApp 3.2
434 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 122.13
436 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 97.81
439 TestStartStop/group/newest-cni/serial/SecondStart 376.2
442 TestStartStop/group/no-preload/serial/SecondStart 375.34
446 TestStartStop/group/newest-cni/serial/Pause 13.08
447 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.59
487 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 242.94

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable volcano --alsologtostderr -v=1: exit status 11 (297.682751ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:27:27.257768  363279 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:27.258512  363279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:27.258529  363279 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:27.258535  363279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:27.258975  363279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:27:27.259384  363279 mustload.go:66] Loading cluster: addons-543946
	I1213 10:27:27.260143  363279 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:27.260163  363279 addons.go:622] checking whether the cluster is paused
	I1213 10:27:27.260308  363279 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:27.260334  363279 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:27:27.261115  363279 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:27:27.279447  363279 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:27.279503  363279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:27:27.295995  363279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:27:27.405734  363279 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:27:27.405822  363279 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:27:27.437538  363279 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:27:27.437611  363279 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:27:27.437632  363279 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:27:27.437651  363279 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:27:27.437687  363279 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:27:27.437708  363279 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:27:27.437726  363279 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:27:27.437743  363279 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:27:27.437761  363279 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:27:27.437795  363279 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:27:27.437820  363279 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:27:27.437841  363279 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:27:27.437860  363279 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:27:27.437879  363279 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:27:27.437909  363279 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:27:27.437943  363279 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:27:27.437986  363279 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:27:27.438022  363279 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:27:27.438044  363279 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:27:27.438063  363279 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:27:27.438088  363279 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:27:27.438107  363279 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:27:27.438141  363279 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:27:27.438158  363279 cri.go:89] found id: ""
	I1213 10:27:27.438246  363279 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:27:27.453682  363279 out.go:203] 
	W1213 10:27:27.456637  363279 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:27:27.456658  363279 out.go:285] * 
	* 
	W1213 10:27:27.462272  363279 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:27:27.465440  363279 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.30s)
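
Every addon-disable failure in this report exits the same way: the `addons disable` pre-flight check ("checking whether the cluster is paused") shells into the node and runs `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory`, so minikube aborts with MK_ADDON_DISABLE_PAUSED before touching the addon. A minimal triage sketch, assuming the addons-543946 profile from this run is still up (the profile name, binary path, and label selector are taken from the log above; the ls check is an added diagnostic):

    # Re-run the exact check minikube's disable path performs on the node:
    out/minikube-linux-arm64 -p addons-543946 ssh "sudo runc list -f json"

    # Does runc's default state directory exist at all, and does CRI-O itself
    # (via crictl, which does not read /run/runc) still see the kube-system containers?
    out/minikube-linux-arm64 -p addons-543946 ssh "ls -ld /run/runc"
    out/minikube-linux-arm64 -p addons-543946 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"

The Registry and RegistryCreds failures below stop at this same disable step with an identical error.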

TestAddons/parallel/Registry (16.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 15.22128ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003450834s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003502793s
addons_test.go:394: (dbg) Run:  kubectl --context addons-543946 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-543946 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-543946 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.519837299s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 ip
2025/12/13 10:27:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable registry --alsologtostderr -v=1: exit status 11 (278.580079ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:27:54.608186  364292 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:54.609030  364292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:54.609072  364292 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:54.609093  364292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:54.609490  364292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:27:54.610276  364292 mustload.go:66] Loading cluster: addons-543946
	I1213 10:27:54.610870  364292 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:54.610898  364292 addons.go:622] checking whether the cluster is paused
	I1213 10:27:54.611015  364292 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:54.611033  364292 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:27:54.611942  364292 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:27:54.630652  364292 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:54.630708  364292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:27:54.651117  364292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:27:54.758676  364292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:27:54.758770  364292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:27:54.795434  364292 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:27:54.795459  364292 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:27:54.795464  364292 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:27:54.795468  364292 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:27:54.795471  364292 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:27:54.795475  364292 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:27:54.795478  364292 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:27:54.795481  364292 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:27:54.795485  364292 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:27:54.795491  364292 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:27:54.795494  364292 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:27:54.795498  364292 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:27:54.795503  364292 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:27:54.795506  364292 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:27:54.795528  364292 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:27:54.795534  364292 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:27:54.795538  364292 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:27:54.795542  364292 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:27:54.795546  364292 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:27:54.795548  364292 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:27:54.795554  364292 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:27:54.795562  364292 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:27:54.795565  364292 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:27:54.795568  364292 cri.go:89] found id: ""
	I1213 10:27:54.795616  364292 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:27:54.822785  364292 out.go:203] 
	W1213 10:27:54.826613  364292 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:27:54.826703  364292 out.go:285] * 
	* 
	W1213 10:27:54.832371  364292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:27:54.835384  364292 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.07s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 7.144538ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-543946
addons_test.go:334: (dbg) Run:  kubectl --context addons-543946 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (249.603008ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 10:28:47.276976  365752 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:28:47.277752  365752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:47.277766  365752 out.go:374] Setting ErrFile to fd 2...
	I1213 10:28:47.277772  365752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:47.278039  365752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:28:47.278351  365752 mustload.go:66] Loading cluster: addons-543946
	I1213 10:28:47.278729  365752 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:47.278747  365752 addons.go:622] checking whether the cluster is paused
	I1213 10:28:47.278861  365752 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:47.278876  365752 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:28:47.279359  365752 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:28:47.295936  365752 ssh_runner.go:195] Run: systemctl --version
	I1213 10:28:47.296044  365752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:28:47.312698  365752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:28:47.419289  365752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:28:47.419392  365752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:28:47.448077  365752 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:28:47.448098  365752 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:28:47.448103  365752 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:28:47.448107  365752 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:28:47.448119  365752 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:28:47.448142  365752 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:28:47.448151  365752 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:28:47.448154  365752 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:28:47.448158  365752 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:28:47.448164  365752 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:28:47.448172  365752 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:28:47.448175  365752 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:28:47.448178  365752 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:28:47.448181  365752 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:28:47.448184  365752 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:28:47.448189  365752 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:28:47.448198  365752 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:28:47.448203  365752 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:28:47.448217  365752 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:28:47.448222  365752 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:28:47.448227  365752 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:28:47.448233  365752 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:28:47.448236  365752 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:28:47.448239  365752 cri.go:89] found id: ""
	I1213 10:28:47.448297  365752 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:28:47.462730  365752 out.go:203] 
	W1213 10:28:47.465606  365752 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:28:47.465629  365752 out.go:285] * 
	* 
	W1213 10:28:47.471232  365752 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:28:47.474219  365752 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (145.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-543946 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-543946 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-543946 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [93bb0d83-2263-4d73-9b09-32307d6adc64] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [93bb0d83-2263-4d73-9b09-32307d6adc64] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003948891s
I1213 10:28:17.165309  356328 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.960701112s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
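
Unlike the addon-disable failures, this one is the HTTP probe itself timing out: the `minikube ssh "curl ..."` at addons_test.go:266 above ran for roughly 2m11s and the inner process exited with status 28, which matches curl's "operation timed out" code. A hedged way to re-check the path by hand, assuming the profile is still running (the -v/--max-time flags and the log query are added for diagnosis; the controller label selector is the one the test itself waits on above):

    # Repeat the failing request with verbose output and an explicit timeout:
    out/minikube-linux-arm64 -p addons-543946 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Confirm the ingress-nginx controller is up and see what it logged:
    kubectl --context addons-543946 -n ingress-nginx get pods,svc
    kubectl --context addons-543946 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
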
addons_test.go:290: (dbg) Run:  kubectl --context addons-543946 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-543946
helpers_test.go:244: (dbg) docker inspect addons-543946:

-- stdout --
	[
	    {
	        "Id": "771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef",
	        "Created": "2025-12-13T10:25:39.465172428Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357716,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:25:39.530052869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/hosts",
	        "LogPath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef-json.log",
	        "Name": "/addons-543946",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-543946:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-543946",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef",
	                "LowerDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-543946",
	                "Source": "/var/lib/docker/volumes/addons-543946/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-543946",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-543946",
	                "name.minikube.sigs.k8s.io": "addons-543946",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f5bc84306c158144616c677fe328bbfd36130bbf7da448e6a93d38bc5d815ac",
	            "SandboxKey": "/var/run/docker/netns/3f5bc84306c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-543946": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:66:63:5f:ed:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71fa05f4e527f76206845e71e9db32ded44f1cd6c1b919bffa94bb8f1644d952",
	                    "EndpointID": "324c7b3d8ec751a8f296d564483d83e6b9a4d29c6770af9def0a220bd30cdd5e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-543946",
	                        "771f4b2573d1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-543946 -n addons-543946
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-543946 logs -n 25: (1.555460659s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-135245                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-135245 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ --download-only -p binary-mirror-392613 --alsologtostderr --binary-mirror http://127.0.0.1:41447 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-392613   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ delete  │ -p binary-mirror-392613                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-392613   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ addons  │ disable dashboard -p addons-543946                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ addons  │ enable dashboard -p addons-543946                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ start   │ -p addons-543946 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:27 UTC │
	│ addons  │ addons-543946 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ addons-543946 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-543946 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ addons-543946 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ addons-543946 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ addons-543946 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ ip      │ addons-543946 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │ 13 Dec 25 10:27 UTC │
	│ addons  │ addons-543946 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ addons-543946 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ ssh     │ addons-543946 ssh cat /opt/local-path-provisioner/pvc-e9ca4193-326e-4213-a679-d66e1f982d49_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	│ addons  │ addons-543946 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ addons  │ addons-543946 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ ssh     │ addons-543946 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ addons  │ addons-543946 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ addons  │ addons-543946 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ addons  │ addons-543946 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-543946                                                                                                                                                                                                                                                                                                                                                                                           │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │ 13 Dec 25 10:28 UTC │
	│ addons  │ addons-543946 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:28 UTC │                     │
	│ ip      │ addons-543946 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:30 UTC │ 13 Dec 25 10:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:25:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:25:14.468964  357320 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:25:14.469113  357320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:14.469147  357320 out.go:374] Setting ErrFile to fd 2...
	I1213 10:25:14.469158  357320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:14.469429  357320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:25:14.469923  357320 out.go:368] Setting JSON to false
	I1213 10:25:14.470739  357320 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7667,"bootTime":1765613848,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:25:14.470809  357320 start.go:143] virtualization:  
	I1213 10:25:14.474161  357320 out.go:179] * [addons-543946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:25:14.478016  357320 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:25:14.478120  357320 notify.go:221] Checking for updates...
	I1213 10:25:14.483785  357320 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:25:14.486581  357320 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:25:14.489483  357320 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:25:14.492244  357320 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:25:14.495072  357320 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:25:14.498158  357320 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:25:14.524603  357320 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:25:14.524730  357320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:14.599845  357320 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:25:14.590760037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:14.599951  357320 docker.go:319] overlay module found
	I1213 10:25:14.603104  357320 out.go:179] * Using the docker driver based on user configuration
	I1213 10:25:14.606007  357320 start.go:309] selected driver: docker
	I1213 10:25:14.606026  357320 start.go:927] validating driver "docker" against <nil>
	I1213 10:25:14.606040  357320 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:25:14.606768  357320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:14.663116  357320 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:25:14.654057643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:14.663281  357320 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:25:14.663559  357320 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:25:14.666501  357320 out.go:179] * Using Docker driver with root privileges
	I1213 10:25:14.669244  357320 cni.go:84] Creating CNI manager for ""
	I1213 10:25:14.669313  357320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:25:14.669326  357320 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:25:14.669412  357320 start.go:353] cluster config:
	{Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1213 10:25:14.674299  357320 out.go:179] * Starting "addons-543946" primary control-plane node in "addons-543946" cluster
	I1213 10:25:14.677120  357320 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:25:14.679984  357320 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:25:14.682759  357320 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:25:14.682811  357320 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 10:25:14.682826  357320 cache.go:65] Caching tarball of preloaded images
	I1213 10:25:14.682854  357320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:25:14.682920  357320 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:25:14.682931  357320 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 10:25:14.683294  357320 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/config.json ...
	I1213 10:25:14.683328  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/config.json: {Name:mk5b74fbe0050f60fa211ab2c491db2cebc68da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:14.698697  357320 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:25:14.698837  357320 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 10:25:14.698856  357320 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 10:25:14.698861  357320 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 10:25:14.698868  357320 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 10:25:14.698873  357320 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 10:25:32.594892  357320 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 10:25:32.594950  357320 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:25:32.594992  357320 start.go:360] acquireMachinesLock for addons-543946: {Name:mk28b673a92918c927bb67ea3cd59db53631e327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:25:32.595112  357320 start.go:364] duration metric: took 94.861µs to acquireMachinesLock for "addons-543946"
	I1213 10:25:32.595143  357320 start.go:93] Provisioning new machine with config: &{Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:25:32.595215  357320 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:25:32.598702  357320 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 10:25:32.598957  357320 start.go:159] libmachine.API.Create for "addons-543946" (driver="docker")
	I1213 10:25:32.599004  357320 client.go:173] LocalClient.Create starting
	I1213 10:25:32.599132  357320 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 10:25:32.656508  357320 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 10:25:33.005921  357320 cli_runner.go:164] Run: docker network inspect addons-543946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:25:33.026544  357320 cli_runner.go:211] docker network inspect addons-543946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:25:33.026645  357320 network_create.go:284] running [docker network inspect addons-543946] to gather additional debugging logs...
	I1213 10:25:33.026668  357320 cli_runner.go:164] Run: docker network inspect addons-543946
	W1213 10:25:33.043121  357320 cli_runner.go:211] docker network inspect addons-543946 returned with exit code 1
	I1213 10:25:33.043148  357320 network_create.go:287] error running [docker network inspect addons-543946]: docker network inspect addons-543946: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-543946 not found
	I1213 10:25:33.043170  357320 network_create.go:289] output of [docker network inspect addons-543946]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-543946 not found
	
	** /stderr **
	I1213 10:25:33.043270  357320 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:25:33.058823  357320 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a51390}
	I1213 10:25:33.058863  357320 network_create.go:124] attempt to create docker network addons-543946 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 10:25:33.058920  357320 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-543946 addons-543946
	I1213 10:25:33.118806  357320 network_create.go:108] docker network addons-543946 192.168.49.0/24 created
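	A quick way to confirm the subnet and gateway picked above against the live network, reusing the same Go-template fields the log queries elsewhere (a minimal sketch, assuming the addons-543946 network still exists):
	
	    docker network inspect addons-543946 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected for this run: 192.168.49.0/24 192.168.49.1
	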
	I1213 10:25:33.118838  357320 kic.go:121] calculated static IP "192.168.49.2" for the "addons-543946" container
	I1213 10:25:33.118927  357320 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:25:33.135606  357320 cli_runner.go:164] Run: docker volume create addons-543946 --label name.minikube.sigs.k8s.io=addons-543946 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:25:33.152664  357320 oci.go:103] Successfully created a docker volume addons-543946
	I1213 10:25:33.152768  357320 cli_runner.go:164] Run: docker run --rm --name addons-543946-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-543946 --entrypoint /usr/bin/test -v addons-543946:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:25:35.445039  357320 cli_runner.go:217] Completed: docker run --rm --name addons-543946-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-543946 --entrypoint /usr/bin/test -v addons-543946:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (2.292225539s)
	I1213 10:25:35.445070  357320 oci.go:107] Successfully prepared a docker volume addons-543946
	I1213 10:25:35.445113  357320 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:25:35.445130  357320 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:25:35.445206  357320 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-543946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:25:39.385245  357320 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-543946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.939998045s)
	I1213 10:25:39.385280  357320 kic.go:203] duration metric: took 3.940147305s to extract preloaded images to volume ...
	W1213 10:25:39.385435  357320 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:25:39.385548  357320 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:25:39.450481  357320 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-543946 --name addons-543946 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-543946 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-543946 --network addons-543946 --ip 192.168.49.2 --volume addons-543946:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:25:39.749172  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Running}}
	I1213 10:25:39.770800  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:25:39.793041  357320 cli_runner.go:164] Run: docker exec addons-543946 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:25:39.840908  357320 oci.go:144] the created container "addons-543946" has a running status.
	I1213 10:25:39.840935  357320 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa...
	I1213 10:25:40.027022  357320 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:25:40.053560  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:25:40.074585  357320 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:25:40.074608  357320 kic_runner.go:114] Args: [docker exec --privileged addons-543946 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:25:40.149438  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:25:40.169585  357320 machine.go:94] provisionDockerMachine start ...
	I1213 10:25:40.169673  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:40.194565  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:40.194885  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:40.194900  357320 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:25:40.195481  357320 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:25:43.346827  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-543946
	
	I1213 10:25:43.346848  357320 ubuntu.go:182] provisioning hostname "addons-543946"
	I1213 10:25:43.346925  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:43.362973  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:43.363401  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:43.363416  357320 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-543946 && echo "addons-543946" | sudo tee /etc/hostname
	I1213 10:25:43.525151  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-543946
	
	I1213 10:25:43.525230  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:43.542888  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:43.543200  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:43.543221  357320 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-543946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-543946/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-543946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:25:43.691780  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:25:43.691812  357320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:25:43.691844  357320 ubuntu.go:190] setting up certificates
	I1213 10:25:43.691868  357320 provision.go:84] configureAuth start
	I1213 10:25:43.691938  357320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-543946
	I1213 10:25:43.709725  357320 provision.go:143] copyHostCerts
	I1213 10:25:43.709807  357320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:25:43.709941  357320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:25:43.710017  357320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:25:43.710073  357320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.addons-543946 san=[127.0.0.1 192.168.49.2 addons-543946 localhost minikube]
	I1213 10:25:44.035865  357320 provision.go:177] copyRemoteCerts
	I1213 10:25:44.035938  357320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:25:44.035979  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.052955  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.155120  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:25:44.173298  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 10:25:44.190575  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1213 10:25:44.208236  357320 provision.go:87] duration metric: took 516.350062ms to configureAuth
	I1213 10:25:44.208308  357320 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:25:44.208530  357320 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:25:44.208647  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.225447  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:44.225772  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:44.225793  357320 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:25:44.526873  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:25:44.526949  357320 machine.go:97] duration metric: took 4.357340475s to provisionDockerMachine
	I1213 10:25:44.526975  357320 client.go:176] duration metric: took 11.927964579s to LocalClient.Create
	I1213 10:25:44.527006  357320 start.go:167] duration metric: took 11.928050964s to libmachine.API.Create "addons-543946"
	I1213 10:25:44.527026  357320 start.go:293] postStartSetup for "addons-543946" (driver="docker")
	I1213 10:25:44.527060  357320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:25:44.527146  357320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:25:44.527255  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.545053  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.647464  357320 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:25:44.650899  357320 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:25:44.650930  357320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:25:44.650946  357320 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:25:44.651015  357320 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:25:44.651046  357320 start.go:296] duration metric: took 123.99285ms for postStartSetup
	I1213 10:25:44.651357  357320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-543946
	I1213 10:25:44.668786  357320 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/config.json ...
	I1213 10:25:44.669064  357320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:25:44.669112  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.685271  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.788779  357320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:25:44.793756  357320 start.go:128] duration metric: took 12.198525799s to createHost
	I1213 10:25:44.793788  357320 start.go:83] releasing machines lock for "addons-543946", held for 12.198660726s
	I1213 10:25:44.793867  357320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-543946
	I1213 10:25:44.810450  357320 ssh_runner.go:195] Run: cat /version.json
	I1213 10:25:44.810512  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.810619  357320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:25:44.810676  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.837823  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.845128  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.943087  357320 ssh_runner.go:195] Run: systemctl --version
	I1213 10:25:45.039338  357320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:25:45.100786  357320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:25:45.107357  357320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:25:45.107487  357320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:25:45.145608  357320 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 10:25:45.145674  357320 start.go:496] detecting cgroup driver to use...
	I1213 10:25:45.145715  357320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:25:45.145835  357320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:25:45.168659  357320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:25:45.186885  357320 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:25:45.187143  357320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:25:45.215981  357320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:25:45.243820  357320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:25:45.388850  357320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:25:45.514000  357320 docker.go:234] disabling docker service ...
	I1213 10:25:45.514069  357320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:25:45.535779  357320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:25:45.548933  357320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:25:45.668606  357320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:25:45.786652  357320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:25:45.799607  357320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:25:45.813905  357320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:25:45.814004  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.822663  357320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:25:45.822765  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.831646  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.840368  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.849462  357320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:25:45.857790  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.866853  357320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.880512  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.889274  357320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:25:45.896691  357320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:25:45.904448  357320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:46.011550  357320 ssh_runner.go:195] Run: sudo systemctl restart crio
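	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted (a reconstruction from the commands above, not a capture of the actual file):
	
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	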
	I1213 10:25:46.187122  357320 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:25:46.187208  357320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:25:46.190936  357320 start.go:564] Will wait 60s for crictl version
	I1213 10:25:46.191003  357320 ssh_runner.go:195] Run: which crictl
	I1213 10:25:46.194180  357320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:25:46.221026  357320 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:25:46.221124  357320 ssh_runner.go:195] Run: crio --version
	I1213 10:25:46.250390  357320 ssh_runner.go:195] Run: crio --version
	I1213 10:25:46.280620  357320 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 10:25:46.283403  357320 cli_runner.go:164] Run: docker network inspect addons-543946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:25:46.299585  357320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:25:46.303238  357320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:25:46.312849  357320 kubeadm.go:884] updating cluster {Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:25:46.312979  357320 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:25:46.313038  357320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:25:46.345145  357320 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:25:46.345167  357320 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:25:46.345221  357320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:25:46.382599  357320 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:25:46.382624  357320 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:25:46.382632  357320 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 10:25:46.382719  357320 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-543946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
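	To confirm the kubelet unit and drop-in landed as written, one option is to view the merged unit from inside the node (a sketch using the profile name from this run):
	
	    minikube -p addons-543946 ssh sudo systemctl cat kubelet
	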
	I1213 10:25:46.382804  357320 ssh_runner.go:195] Run: crio config
	I1213 10:25:46.437296  357320 cni.go:84] Creating CNI manager for ""
	I1213 10:25:46.437321  357320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:25:46.437365  357320 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:25:46.437394  357320 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-543946 NodeName:addons-543946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:25:46.437524  357320 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-543946"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
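	The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp below); to sanity-check such a config by hand, kubeadm's dry-run mode is one option (a sketch, not how minikube invokes it here, assuming the v1.34.2 binaries already staged on the node):
	
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	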
	
	I1213 10:25:46.437600  357320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:25:46.445581  357320 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:25:46.445654  357320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:25:46.454626  357320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 10:25:46.467498  357320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:25:46.480238  357320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1213 10:25:46.492792  357320 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:25:46.496499  357320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:25:46.506054  357320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:46.611624  357320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:25:46.626683  357320 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946 for IP: 192.168.49.2
	I1213 10:25:46.626706  357320 certs.go:195] generating shared ca certs ...
	I1213 10:25:46.626722  357320 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:46.626913  357320 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:25:47.204768  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt ...
	I1213 10:25:47.204805  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt: {Name:mk40527cd6a78d6865530eda3515d7d66bc3735f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.205005  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key ...
	I1213 10:25:47.205018  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key: {Name:mkedfc6b0347ec89e97cad1eedd0013496b4a5aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.205107  357320 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:25:47.439029  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt ...
	I1213 10:25:47.439057  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt: {Name:mk9420c8b224fa9f09e2c198603b8e1c2c54b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.439237  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key ...
	I1213 10:25:47.439250  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key: {Name:mk736a395033f19b2378469d93d84caf4d9f9094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.439331  357320 certs.go:257] generating profile certs ...
	I1213 10:25:47.439393  357320 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.key
	I1213 10:25:47.439410  357320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt with IP's: []
	I1213 10:25:47.565758  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt ...
	I1213 10:25:47.565792  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: {Name:mkc757bbee111d3d94e08f102e6b9051de83f356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.565986  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.key ...
	I1213 10:25:47.566000  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.key: {Name:mk1b27f6c7da454226a68ac3488e27ecfef1f4a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.566088  357320 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae
	I1213 10:25:47.566113  357320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 10:25:47.757570  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae ...
	I1213 10:25:47.757603  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae: {Name:mk9cb03e9bf28afc834243a7959df21e4d0904d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.757782  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae ...
	I1213 10:25:47.757796  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae: {Name:mkaf5350f7e2fa2bca7302c044ea91647c8e6a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.757882  357320 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt
	I1213 10:25:47.757961  357320 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key
	I1213 10:25:47.758016  357320 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key
	I1213 10:25:47.758036  357320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt with IP's: []
	I1213 10:25:47.952145  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt ...
	I1213 10:25:47.952173  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt: {Name:mk378eb64df056a2196f869ba6c51c0c990ec56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.952379  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key ...
	I1213 10:25:47.952394  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key: {Name:mke175128fe2051803c7e5af81e699c14acdccba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.952590  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:25:47.952634  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:25:47.952665  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:25:47.952699  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:25:47.953323  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:25:47.972174  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:25:47.989840  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:25:48.008028  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:25:48.029208  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:25:48.048588  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:25:48.067664  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:25:48.086397  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:25:48.104915  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
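The apiserver serving certificate copied above was generated with the SANs requested earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 plus the API server names). Once the files are on the node, the SANs can be checked with openssl, for example (illustrative):

	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'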
	I1213 10:25:48.124398  357320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:25:48.138229  357320 ssh_runner.go:195] Run: openssl version
	I1213 10:25:48.144725  357320 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.152484  357320 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:25:48.160202  357320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.164143  357320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.164210  357320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.205363  357320 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:25:48.212881  357320 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
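The two symlink steps above follow OpenSSL's hashed-directory convention: /etc/ssl/certs/<subject-hash>.0 must point at the CA so that clients using the system trust store can locate it. The hash in the link name (b5213941 here) is simply the output of:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem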
	I1213 10:25:48.220348  357320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:25:48.224037  357320 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:25:48.224089  357320 kubeadm.go:401] StartCluster: {Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:25:48.224176  357320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:25:48.224247  357320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:25:48.251200  357320 cri.go:89] found id: ""
	I1213 10:25:48.251342  357320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:25:48.259403  357320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:25:48.267355  357320 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:25:48.267424  357320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:25:48.279121  357320 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:25:48.279141  357320 kubeadm.go:158] found existing configuration files:
	
	I1213 10:25:48.279195  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:25:48.288126  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:25:48.288190  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:25:48.296205  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:25:48.304944  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:25:48.305008  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:25:48.312915  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:25:48.321585  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:25:48.321648  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:25:48.330472  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:25:48.338316  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:25:48.338418  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:25:48.346331  357320 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:25:48.386720  357320 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:25:48.386795  357320 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:25:48.411758  357320 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:25:48.411860  357320 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:25:48.411947  357320 kubeadm.go:319] OS: Linux
	I1213 10:25:48.412031  357320 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:25:48.412123  357320 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:25:48.412183  357320 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:25:48.412239  357320 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:25:48.412293  357320 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:25:48.412364  357320 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:25:48.412459  357320 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:25:48.412551  357320 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:25:48.412643  357320 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:25:48.479655  357320 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:25:48.479859  357320 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:25:48.479992  357320 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:25:48.489680  357320 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:25:48.496275  357320 out.go:252]   - Generating certificates and keys ...
	I1213 10:25:48.496389  357320 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:25:48.496473  357320 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:25:49.351627  357320 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:25:49.740615  357320 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:25:50.062262  357320 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:25:50.332710  357320 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:25:50.876472  357320 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:25:50.876812  357320 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-543946 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:25:51.864564  357320 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:25:51.864940  357320 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-543946 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:25:53.530037  357320 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:25:53.764404  357320 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:25:54.122075  357320 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:25:54.122354  357320 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:25:54.715300  357320 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:25:55.724056  357320 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:25:56.134010  357320 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:25:56.555160  357320 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:25:56.972558  357320 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:25:56.973516  357320 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:25:56.976881  357320 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:25:56.980285  357320 out.go:252]   - Booting up control plane ...
	I1213 10:25:56.980389  357320 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:25:56.980465  357320 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:25:56.981161  357320 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:25:56.996591  357320 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:25:56.996965  357320 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:25:57.005948  357320 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:25:57.006050  357320 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:25:57.006088  357320 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:25:57.140027  357320 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:25:57.140151  357320 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:25:58.638858  357320 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501736903s
	I1213 10:25:58.645601  357320 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:25:58.645702  357320 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 10:25:58.645792  357320 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:25:58.645871  357320 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:26:01.360702  357320 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.714763995s
	I1213 10:26:02.990623  357320 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.345025068s
	I1213 10:26:04.647832  357320 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002027628s
	I1213 10:26:04.682682  357320 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:26:04.697532  357320 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:26:04.712309  357320 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:26:04.712534  357320 kubeadm.go:319] [mark-control-plane] Marking the node addons-543946 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:26:04.724663  357320 kubeadm.go:319] [bootstrap-token] Using token: gouzdj.5i63bgisvk0e7a0d
	I1213 10:26:04.727724  357320 out.go:252]   - Configuring RBAC rules ...
	I1213 10:26:04.727853  357320 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:26:04.732435  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:26:04.742183  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:26:04.746372  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:26:04.750568  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:26:04.754837  357320 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:26:05.054398  357320 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:26:05.487328  357320 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:26:06.054936  357320 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:26:06.056340  357320 kubeadm.go:319] 
	I1213 10:26:06.056416  357320 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:26:06.056426  357320 kubeadm.go:319] 
	I1213 10:26:06.056500  357320 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:26:06.056508  357320 kubeadm.go:319] 
	I1213 10:26:06.056532  357320 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:26:06.056591  357320 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:26:06.056643  357320 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:26:06.056651  357320 kubeadm.go:319] 
	I1213 10:26:06.056702  357320 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:26:06.056711  357320 kubeadm.go:319] 
	I1213 10:26:06.056756  357320 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:26:06.056764  357320 kubeadm.go:319] 
	I1213 10:26:06.056814  357320 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:26:06.056889  357320 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:26:06.056957  357320 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:26:06.056984  357320 kubeadm.go:319] 
	I1213 10:26:06.057069  357320 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:26:06.057146  357320 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:26:06.057154  357320 kubeadm.go:319] 
	I1213 10:26:06.057234  357320 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gouzdj.5i63bgisvk0e7a0d \
	I1213 10:26:06.057339  357320 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 \
	I1213 10:26:06.057364  357320 kubeadm.go:319] 	--control-plane 
	I1213 10:26:06.057370  357320 kubeadm.go:319] 
	I1213 10:26:06.057451  357320 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:26:06.057457  357320 kubeadm.go:319] 
	I1213 10:26:06.057535  357320 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gouzdj.5i63bgisvk0e7a0d \
	I1213 10:26:06.057636  357320 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 
	I1213 10:26:06.061027  357320 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:26:06.061297  357320 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:26:06.061421  357320 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
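For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Assuming an RSA CA key (minikube's default), it can be recomputed on the control plane with the standard openssl pipeline; illustrative, using the certificatesDir from the config above:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'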
	I1213 10:26:06.061436  357320 cni.go:84] Creating CNI manager for ""
	I1213 10:26:06.061444  357320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:26:06.064800  357320 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 10:26:06.067883  357320 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 10:26:06.071940  357320 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:26:06.071965  357320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1213 10:26:06.087252  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 10:26:06.381023  357320 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:26:06.381145  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:06.381211  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-543946 minikube.k8s.io/updated_at=2025_12_13T10_26_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=addons-543946 minikube.k8s.io/primary=true
	I1213 10:26:06.513048  357320 ops.go:34] apiserver oom_adj: -16
	I1213 10:26:06.524875  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:07.025575  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:07.525003  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:08.024981  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:08.524942  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:09.025082  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:09.525042  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:10.025671  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:10.525267  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:10.681837  357320 kubeadm.go:1114] duration metric: took 4.300736591s to wait for elevateKubeSystemPrivileges
	I1213 10:26:10.681865  357320 kubeadm.go:403] duration metric: took 22.457782572s to StartCluster
	I1213 10:26:10.681882  357320 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:10.681996  357320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:26:10.682356  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:10.682532  357320 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:26:10.682724  357320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:26:10.682975  357320 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:26:10.683008  357320 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 10:26:10.683089  357320 addons.go:70] Setting yakd=true in profile "addons-543946"
	I1213 10:26:10.683102  357320 addons.go:239] Setting addon yakd=true in "addons-543946"
	I1213 10:26:10.683122  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.683606  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.684095  357320 addons.go:70] Setting metrics-server=true in profile "addons-543946"
	I1213 10:26:10.684112  357320 addons.go:239] Setting addon metrics-server=true in "addons-543946"
	I1213 10:26:10.684134  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.684557  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.684704  357320 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-543946"
	I1213 10:26:10.684732  357320 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-543946"
	I1213 10:26:10.684757  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.685206  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.688188  357320 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-543946"
	I1213 10:26:10.691278  357320 addons.go:70] Setting cloud-spanner=true in profile "addons-543946"
	I1213 10:26:10.692684  357320 addons.go:239] Setting addon cloud-spanner=true in "addons-543946"
	I1213 10:26:10.692717  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.693185  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.693369  357320 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-543946"
	I1213 10:26:10.693460  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.691303  357320 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-543946"
	I1213 10:26:10.694677  357320 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-543946"
	I1213 10:26:10.694702  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.695096  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.695938  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.691313  357320 out.go:179] * Verifying Kubernetes components...
	I1213 10:26:10.691341  357320 addons.go:70] Setting default-storageclass=true in profile "addons-543946"
	I1213 10:26:10.720469  357320 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-543946"
	I1213 10:26:10.720855  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.726394  357320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:26:10.691355  357320 addons.go:70] Setting gcp-auth=true in profile "addons-543946"
	I1213 10:26:10.726808  357320 mustload.go:66] Loading cluster: addons-543946
	I1213 10:26:10.727012  357320 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:26:10.727272  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.691364  357320 addons.go:70] Setting ingress=true in profile "addons-543946"
	I1213 10:26:10.740157  357320 addons.go:239] Setting addon ingress=true in "addons-543946"
	I1213 10:26:10.740205  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.740689  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.691370  357320 addons.go:70] Setting ingress-dns=true in profile "addons-543946"
	I1213 10:26:10.764840  357320 addons.go:239] Setting addon ingress-dns=true in "addons-543946"
	I1213 10:26:10.764890  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.765377  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.791284  357320 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 10:26:10.691376  357320 addons.go:70] Setting inspektor-gadget=true in profile "addons-543946"
	I1213 10:26:10.691613  357320 addons.go:70] Setting registry=true in profile "addons-543946"
	I1213 10:26:10.691621  357320 addons.go:70] Setting registry-creds=true in profile "addons-543946"
	I1213 10:26:10.691632  357320 addons.go:70] Setting storage-provisioner=true in profile "addons-543946"
	I1213 10:26:10.691638  357320 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-543946"
	I1213 10:26:10.691644  357320 addons.go:70] Setting volcano=true in profile "addons-543946"
	I1213 10:26:10.691649  357320 addons.go:70] Setting volumesnapshots=true in profile "addons-543946"
	I1213 10:26:10.797984  357320 addons.go:239] Setting addon volumesnapshots=true in "addons-543946"
	I1213 10:26:10.798060  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.806530  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.814383  357320 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 10:26:10.814435  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 10:26:10.814446  357320 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 10:26:10.814508  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:10.843915  357320 addons.go:239] Setting addon inspektor-gadget=true in "addons-543946"
	I1213 10:26:10.844012  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.844522  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.844844  357320 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:26:10.844858  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 10:26:10.844901  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:10.862644  357320 addons.go:239] Setting addon registry=true in "addons-543946"
	I1213 10:26:10.862744  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.863262  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.879356  357320 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 10:26:10.882491  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 10:26:10.882520  357320 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 10:26:10.882597  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:10.895612  357320 addons.go:239] Setting addon registry-creds=true in "addons-543946"
	I1213 10:26:10.895676  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.896227  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.904490  357320 addons.go:239] Setting addon storage-provisioner=true in "addons-543946"
	I1213 10:26:10.904537  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.905059  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.927812  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.928900  357320 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-543946"
	I1213 10:26:10.929213  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.930113  357320 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 10:26:10.930269  357320 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 10:26:10.953194  357320 addons.go:239] Setting addon volcano=true in "addons-543946"
	I1213 10:26:10.953249  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.953739  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.960169  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 10:26:10.966192  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 10:26:10.974908  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 10:26:10.980089  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 10:26:10.981701  357320 addons.go:239] Setting addon default-storageclass=true in "addons-543946"
	I1213 10:26:10.981738  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.982138  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.987157  357320 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 10:26:10.987179  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 10:26:10.987243  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.023794  357320 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:26:11.023818  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 10:26:11.023884  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.070613  357320 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 10:26:11.091585  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 10:26:11.098421  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 10:26:11.106889  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 10:26:11.107069  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 10:26:11.107971  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 10:26:11.110121  357320 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:26:11.110145  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 10:26:11.110204  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.131979  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.132833  357320 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 10:26:11.139763  357320 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 10:26:11.139895  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 10:26:11.142739  357320 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 10:26:11.143912  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 10:26:11.143963  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 10:26:11.144781  357320 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 10:26:11.144793  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 10:26:11.144851  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.145027  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.154286  357320 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:26:11.155305  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 10:26:11.155322  357320 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 10:26:11.155388  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.155654  357320 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:26:11.155668  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 10:26:11.155707  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.172967  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.173726  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.174130  357320 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:26:11.174141  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:26:11.174195  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.175747  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:26:11.180973  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:26:11.187668  357320 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:26:11.187701  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 10:26:11.187766  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.211055  357320 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 10:26:11.219969  357320 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-543946"
	I1213 10:26:11.220015  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:11.220425  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:11.222791  357320 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:26:11.222819  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 10:26:11.222880  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.266832  357320 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:26:11.266853  357320 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:26:11.266911  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	W1213 10:26:11.273062  357320 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 10:26:11.288343  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.302632  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.336848  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.343810  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.363675  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.398834  357320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
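The pipeline above edits the CoreDNS Corefile in place: it injects a hosts block that resolves host.minikube.internal to the gateway IP (192.168.49.1) and enables query logging, then replaces the ConfigMap. The patched Corefile can be inspected afterwards with (illustrative):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'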
	I1213 10:26:11.399215  357320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:26:11.407761  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.418478  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.427003  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.435686  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.436863  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.442711  357320 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 10:26:11.445695  357320 out.go:179]   - Using image docker.io/busybox:stable
	I1213 10:26:11.448047  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	W1213 10:26:11.448592  357320 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:26:11.448615  357320 retry.go:31] will retry after 361.072928ms: ssh: handshake failed: EOF
	W1213 10:26:11.448651  357320 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:26:11.448657  357320 retry.go:31] will retry after 252.300133ms: ssh: handshake failed: EOF
	I1213 10:26:11.448813  357320 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:26:11.448825  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 10:26:11.448880  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.477613  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.634678  357320 node_ready.go:35] waiting up to 6m0s for node "addons-543946" to be "Ready" ...
	I1213 10:26:11.918381  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:26:11.932500  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 10:26:11.932577  357320 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 10:26:12.044036  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 10:26:12.044097  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 10:26:12.135634  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 10:26:12.135711  357320 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 10:26:12.243769  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:26:12.278423  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 10:26:12.284695  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 10:26:12.284771  357320 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 10:26:12.420991  357320 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 10:26:12.421019  357320 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 10:26:12.435709  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 10:26:12.435737  357320 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 10:26:12.463616  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:26:12.528560  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:26:12.534863  357320 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 10:26:12.534889  357320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 10:26:12.560606  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:26:12.579799  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:26:12.587299  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 10:26:12.587325  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 10:26:12.588879  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:26:12.612020  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:26:12.612058  357320 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 10:26:12.715274  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:26:12.744114  357320 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:26:12.744184  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 10:26:12.837633  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:26:12.837656  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 10:26:12.839663  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:26:12.909186  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:26:12.941000  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 10:26:12.941033  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 10:26:12.961032  357320 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 10:26:12.961064  357320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 10:26:13.018907  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:26:13.176800  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:26:13.207996  357320 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 10:26:13.208032  357320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 10:26:13.317285  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 10:26:13.317327  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 10:26:13.601366  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 10:26:13.601409  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 10:26:13.635584  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 10:26:13.635655  357320 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	W1213 10:26:13.638229  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:13.862214  357320 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:26:13.862289  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 10:26:14.193816  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 10:26:14.193902  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 10:26:14.250643  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:26:14.447126  357320 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.048208521s)
	I1213 10:26:14.447272  357320 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 10:26:14.447246  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.5287896s)
	I1213 10:26:14.661440  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 10:26:14.661516  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 10:26:14.970327  357320 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-543946" context rescaled to 1 replicas
	I1213 10:26:15.006643  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 10:26:15.006684  357320 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 10:26:15.210099  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 10:26:15.210124  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 10:26:15.390775  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 10:26:15.390846  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 10:26:15.638187  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 10:26:15.638261  357320 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1213 10:26:15.659167  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:15.773136  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 10:26:16.722494  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.478638329s)
	I1213 10:26:16.722602  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.44410289s)
	I1213 10:26:16.722635  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.258997514s)
	I1213 10:26:17.255259  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.726660971s)
	I1213 10:26:17.255343  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.694711079s)
	I1213 10:26:18.110755  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.530917898s)
	I1213 10:26:18.111277  357320 addons.go:495] Verifying addon ingress=true in "addons-543946"
	I1213 10:26:18.110872  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.521963683s)
	I1213 10:26:18.110928  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.39562493s)
	I1213 10:26:18.110964  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.271279069s)
	I1213 10:26:18.111407  357320 addons.go:495] Verifying addon registry=true in "addons-543946"
	I1213 10:26:18.110980  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.201769172s)
	I1213 10:26:18.111028  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.092094668s)
	I1213 10:26:18.112052  357320 addons.go:495] Verifying addon metrics-server=true in "addons-543946"
	I1213 10:26:18.111073  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.934244748s)
	I1213 10:26:18.111145  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.860419722s)
	W1213 10:26:18.112159  357320 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 10:26:18.112175  357320 retry.go:31] will retry after 159.73683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 10:26:18.115767  357320 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-543946 service yakd-dashboard -n yakd-dashboard
	
	I1213 10:26:18.115918  357320 out.go:179] * Verifying registry addon...
	I1213 10:26:18.115969  357320 out.go:179] * Verifying ingress addon...
	I1213 10:26:18.120381  357320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 10:26:18.121299  357320 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 10:26:18.133504  357320 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:26:18.133524  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:18.134227  357320 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 10:26:18.134245  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:26:18.140907  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:18.272859  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:26:18.461147  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.687889292s)
	I1213 10:26:18.461195  357320 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-543946"
	I1213 10:26:18.464149  357320 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 10:26:18.467926  357320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 10:26:18.496627  357320 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:26:18.496662  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:18.625245  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:18.626592  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:18.698171  357320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 10:26:18.698283  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:18.715174  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:18.832926  357320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 10:26:18.846279  357320 addons.go:239] Setting addon gcp-auth=true in "addons-543946"
	I1213 10:26:18.846329  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:18.846824  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:18.864269  357320 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 10:26:18.864326  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:18.880384  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:18.971942  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:19.125013  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:19.125632  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:19.472250  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:19.624396  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:19.624608  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:19.973156  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:20.124367  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:20.124643  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:20.470907  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:20.624904  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:20.625165  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:26:20.638041  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:20.971860  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:21.002146  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.729232917s)
	I1213 10:26:21.002186  357320 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.137873452s)
	I1213 10:26:21.006851  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:26:21.011449  357320 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 10:26:21.014716  357320 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 10:26:21.014762  357320 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 10:26:21.030597  357320 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 10:26:21.030666  357320 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 10:26:21.045526  357320 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:26:21.045548  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 10:26:21.059066  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:26:21.130145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:21.130779  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:21.473625  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:21.552078  357320 addons.go:495] Verifying addon gcp-auth=true in "addons-543946"
	I1213 10:26:21.554955  357320 out.go:179] * Verifying gcp-auth addon...
	I1213 10:26:21.559345  357320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 10:26:21.573583  357320 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 10:26:21.573606  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:21.674426  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:21.674431  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:21.971187  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:22.063094  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:22.123667  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:22.124564  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:22.471602  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:22.564193  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:22.624335  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:22.624534  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:26:22.638294  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:22.971838  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:23.062879  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:23.124000  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:23.125150  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:23.473509  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:23.562326  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:23.624179  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:23.624714  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:23.971416  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:24.062533  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:24.123309  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:24.124568  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:24.471631  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:24.563090  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:24.624392  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:24.624761  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:26:24.638418  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:24.971270  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:25.063383  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:25.164119  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:25.164449  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:25.473341  357320 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:26:25.473365  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:25.618561  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:25.665011  357320 node_ready.go:49] node "addons-543946" is "Ready"
	I1213 10:26:25.665041  357320 node_ready.go:38] duration metric: took 14.030277579s for node "addons-543946" to be "Ready" ...
	I1213 10:26:25.665056  357320 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:26:25.665115  357320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:25.685422  357320 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:26:25.685499  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:25.685674  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:25.696630  357320 api_server.go:72] duration metric: took 15.014055801s to wait for apiserver process to appear ...
	I1213 10:26:25.696710  357320 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:26:25.696746  357320 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 10:26:25.728286  357320 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 10:26:25.730210  357320 api_server.go:141] control plane version: v1.34.2
	I1213 10:26:25.730279  357320 api_server.go:131] duration metric: took 33.548613ms to wait for apiserver health ...
	I1213 10:26:25.730301  357320 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:26:25.789234  357320 system_pods.go:59] 19 kube-system pods found
	I1213 10:26:25.789334  357320 system_pods.go:61] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:25.789355  357320 system_pods.go:61] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending
	I1213 10:26:25.789402  357320 system_pods.go:61] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:25.789506  357320 system_pods.go:61] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending
	I1213 10:26:25.789532  357320 system_pods.go:61] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:25.789563  357320 system_pods.go:61] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:25.789585  357320 system_pods.go:61] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:25.789606  357320 system_pods.go:61] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:25.789645  357320 system_pods.go:61] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:25.789668  357320 system_pods.go:61] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:25.789687  357320 system_pods.go:61] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:25.789724  357320 system_pods.go:61] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:25.789747  357320 system_pods.go:61] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending
	I1213 10:26:25.789771  357320 system_pods.go:61] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:25.789815  357320 system_pods.go:61] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending
	I1213 10:26:25.789838  357320 system_pods.go:61] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending
	I1213 10:26:25.789889  357320 system_pods.go:61] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:25.789914  357320 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending
	I1213 10:26:25.789934  357320 system_pods.go:61] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:25.789967  357320 system_pods.go:74] duration metric: took 59.646683ms to wait for pod list to return data ...
	I1213 10:26:25.789994  357320 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:26:25.805909  357320 default_sa.go:45] found service account: "default"
	I1213 10:26:25.805986  357320 default_sa.go:55] duration metric: took 15.970933ms for default service account to be created ...
	I1213 10:26:25.806012  357320 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:26:25.848445  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:25.848535  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:25.848560  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending
	I1213 10:26:25.848603  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:25.848667  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending
	I1213 10:26:25.848775  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:25.848802  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:25.848821  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:25.848843  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:25.848884  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:25.848908  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:25.848993  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:25.849024  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:25.849045  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending
	I1213 10:26:25.849068  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:25.849103  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending
	I1213 10:26:25.849212  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending
	I1213 10:26:25.849241  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:25.849263  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending
	I1213 10:26:25.849291  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:25.849334  357320 retry.go:31] will retry after 231.460689ms: missing components: kube-dns
	I1213 10:26:25.982707  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:26.064949  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:26.098503  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:26.098595  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:26.098620  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:26:26.098663  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:26.098689  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:26:26.098710  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:26.098749  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:26.098774  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:26.098795  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:26.098835  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:26.098871  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:26.098893  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:26.098931  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:26.098959  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:26:26.098996  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:26.099024  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:26:26.099048  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:26:26.099083  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.099114  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.099136  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:26.099181  357320 retry.go:31] will retry after 299.980132ms: missing components: kube-dns
	I1213 10:26:26.130032  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:26.130376  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:26.406620  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:26.406706  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:26.406731  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:26:26.406773  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:26.406801  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:26:26.406824  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:26.406861  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:26.406888  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:26.406908  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:26.406957  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:26.406982  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:26.407005  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:26.407039  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:26.407067  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:26:26.407088  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:26.407126  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:26:26.407153  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:26:26.407175  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.407214  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.407240  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:26.407270  357320 retry.go:31] will retry after 417.018213ms: missing components: kube-dns
	I1213 10:26:26.472259  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:26.563093  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:26.625066  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:26.625169  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:26.831079  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:26.831113  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Running
	I1213 10:26:26.831124  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:26:26.831131  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:26.831140  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:26:26.831146  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:26.831151  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:26.831161  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:26.831166  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:26.831176  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:26.831181  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:26.831191  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:26.831198  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:26.831208  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:26:26.831214  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:26.831222  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:26:26.831230  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:26:26.831237  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.831243  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.831248  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Running
	I1213 10:26:26.831259  357320 system_pods.go:126] duration metric: took 1.02522791s to wait for k8s-apps to be running ...
	I1213 10:26:26.831271  357320 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:26:26.831326  357320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:26:26.847578  357320 system_svc.go:56] duration metric: took 16.298986ms WaitForService to wait for kubelet
	I1213 10:26:26.847609  357320 kubeadm.go:587] duration metric: took 16.165040106s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:26:26.847630  357320 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:26:26.851586  357320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 10:26:26.851671  357320 node_conditions.go:123] node cpu capacity is 2
	I1213 10:26:26.851701  357320 node_conditions.go:105] duration metric: took 4.065235ms to run NodePressure ...
	I1213 10:26:26.851743  357320 start.go:242] waiting for startup goroutines ...
	I1213 10:26:26.972082  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:27.063044  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:27.128232  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:27.128336  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:27.472039  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:27.563583  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:27.625254  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:27.625383  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:27.972214  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:28.063028  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:28.124427  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:28.124597  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:28.471579  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:28.562511  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:28.625065  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:28.625694  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:28.971433  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:29.063474  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:29.124905  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:29.125648  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:29.476949  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:29.576904  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:29.624356  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:29.624961  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:29.981669  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:30.074396  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:30.126479  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:30.126882  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:30.471741  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:30.562705  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:30.624071  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:30.626819  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:30.972210  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:31.063496  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:31.124103  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:31.127143  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:31.472192  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:31.563278  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:31.625724  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:31.625836  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:31.970808  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:32.062718  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:32.123483  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:32.125873  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:32.472378  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:32.562299  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:32.626218  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:32.626466  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:32.971925  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:33.062940  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:33.126354  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:33.126488  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:33.472404  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:33.562425  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:33.625435  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:33.625569  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:33.972541  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:34.062859  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:34.124985  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:34.126900  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:34.472387  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:34.572629  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:34.625405  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:34.625618  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:34.971281  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:35.062048  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:35.125832  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:35.126051  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:35.470924  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:35.562805  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:35.625227  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:35.625772  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:35.974146  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:36.063579  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:36.124298  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:36.126086  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:36.471941  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:36.563071  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:36.625639  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:36.625881  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:36.972162  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:37.062913  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:37.125199  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:37.126016  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:37.472773  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:37.563066  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:37.636806  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:37.637268  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:37.972471  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:38.064095  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:38.125771  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:38.126437  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:38.474012  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:38.564369  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:38.623814  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:38.626709  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:38.971598  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:39.062729  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:39.125003  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:39.126270  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:39.471399  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:39.562655  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:39.623954  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:39.624487  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:39.973954  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:40.063363  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:40.122964  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:40.124562  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:40.472938  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:40.564757  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:40.626251  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:40.626687  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:40.972275  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:41.075434  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:41.197717  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:41.197878  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:41.471576  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:41.562716  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:41.623621  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:41.624616  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:41.971305  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:42.062522  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:42.125102  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:42.125597  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:42.472432  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:42.562911  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:42.625033  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:42.626769  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:42.971464  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:43.062988  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:43.125821  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:43.126918  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:43.471856  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:43.563191  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:43.626157  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:43.626614  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:43.971981  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:44.063375  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:44.124591  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:44.124959  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:44.472224  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:44.572695  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:44.624107  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:44.625545  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:44.971248  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:45.063259  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:45.127866  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:45.128064  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:45.471720  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:45.562409  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:45.625381  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:45.625535  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:45.972810  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:46.063014  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:46.125987  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:46.126379  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:46.472348  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:46.562636  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:46.625755  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:46.626607  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:46.972439  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:47.064061  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:47.125339  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:47.125487  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:47.472319  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:47.562602  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:47.623143  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:47.624629  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:47.971224  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:48.062115  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:48.124595  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:48.124760  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:48.472555  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:48.562616  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:48.625775  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:48.626822  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:48.970908  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:49.063089  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:49.125141  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:49.125319  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:49.471813  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:49.563006  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:49.625047  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:49.625199  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:49.981978  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:50.065199  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:50.126240  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:50.126422  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:50.475459  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:50.576644  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:50.676839  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:50.677372  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:50.973790  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:51.063084  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:51.129145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:51.129800  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:51.471999  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:51.563363  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:51.626048  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:51.626468  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:51.971861  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:52.062858  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:52.126305  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:52.126917  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:52.473669  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:52.563320  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:52.625941  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:52.626167  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:52.977318  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:53.062758  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:53.124117  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:53.124784  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:53.471068  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:53.563108  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:53.625776  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:53.626173  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:53.971927  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:54.063058  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:54.133121  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:54.133143  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:54.473103  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:54.574145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:54.677646  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:54.678031  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:54.970954  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:55.062898  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:55.124672  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:55.124916  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:55.471094  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:55.563030  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:55.623724  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:55.624422  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:55.976604  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:56.063028  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:56.124511  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:56.125959  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:56.472865  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:56.562961  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:56.625816  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:56.626291  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:56.973527  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:57.062515  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:57.124771  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:57.126603  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:57.472723  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:57.562729  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:57.626061  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:57.626421  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:57.972224  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:58.063536  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:58.124557  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:58.126335  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:58.474211  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:58.563211  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:58.626121  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:58.626332  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:58.973309  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:59.063899  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:59.125900  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:59.126287  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:59.472331  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:59.562352  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:59.624523  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:59.625667  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:59.975780  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:00.074661  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:00.156620  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:00.157143  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:00.472352  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:00.562068  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:00.625671  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:00.625866  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:00.971757  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:01.062876  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:01.124366  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:01.125914  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:01.471566  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:01.562426  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:01.623873  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:01.625160  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:02.005131  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:02.063274  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:02.124463  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:02.124601  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:02.471058  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:02.563424  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:02.624529  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:02.625532  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:02.971924  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:03.063012  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:03.126220  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:03.126362  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:03.472380  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:03.562457  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:03.624830  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:03.625593  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:03.972310  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:04.063629  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:04.128380  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:04.128576  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:04.470837  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:04.567357  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:04.624696  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:04.624956  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:04.976956  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:05.074333  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:05.124931  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:05.125714  357320 kapi.go:107] duration metric: took 47.005334302s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 10:27:05.472035  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:05.563162  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:05.625410  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:05.971974  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:06.063663  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:06.125156  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:06.472359  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:06.572417  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:06.624681  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:06.972661  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:07.062806  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:07.125043  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:07.471222  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:07.565876  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:07.625945  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:07.972041  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:08.063145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:08.125905  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:08.471747  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:08.571891  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:08.631964  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:08.971428  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:09.063676  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:09.125111  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:09.472529  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:09.565787  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:09.626830  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:09.973357  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:10.062738  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:10.125102  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:10.471962  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:10.563539  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:10.624664  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:10.972511  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:11.064751  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:11.126169  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:11.474080  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:11.575832  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:11.624452  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:11.972169  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:12.063627  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:12.135018  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:12.471565  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:12.572501  357320 kapi.go:107] duration metric: took 51.013158445s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 10:27:12.575976  357320 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-543946 cluster.
	I1213 10:27:12.579630  357320 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 10:27:12.582867  357320 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
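	(Illustration of the skip label mentioned in the message above; the pod name, image, and label value here are placeholders and are not taken from this run. A pod created with the `gcp-auth-skip-secret` label should not get the credentials mounted, e.g.:)
	    kubectl run demo-pod --image=nginx --labels="gcp-auth-skip-secret=true"   # demo-pod/nginx and the value "true" are illustrative; the label key is the one named above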
	I1213 10:27:12.624662  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:12.971185  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:13.124369  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:13.475822  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:13.631601  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:13.972069  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:14.129382  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:14.472767  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:14.625884  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:14.971828  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:15.126120  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:15.472509  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:15.624174  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:15.972129  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:16.125634  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:16.471655  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:16.625351  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:16.972278  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:17.125019  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:17.471214  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:17.625316  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:17.971898  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:18.132928  357320 kapi.go:107] duration metric: took 1m0.011625722s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 10:27:18.471889  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:18.973182  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:19.548074  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:19.973877  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:20.472245  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:20.972127  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:21.472749  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:21.971906  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:22.471763  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:22.973183  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:23.471906  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:23.971655  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:24.471117  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:24.971997  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:25.472007  357320 kapi.go:107] duration metric: took 1m7.004084489s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 10:27:25.475073  357320 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, amd-gpu-device-plugin, registry-creds, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 10:27:25.477943  357320 addons.go:530] duration metric: took 1m14.794927144s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher inspektor-gadget amd-gpu-device-plugin registry-creds storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1213 10:27:25.478006  357320 start.go:247] waiting for cluster config update ...
	I1213 10:27:25.478029  357320 start.go:256] writing updated cluster config ...
	I1213 10:27:25.478357  357320 ssh_runner.go:195] Run: rm -f paused
	I1213 10:27:25.482980  357320 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:27:25.486505  357320 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2h2qj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.491276  357320 pod_ready.go:94] pod "coredns-66bc5c9577-2h2qj" is "Ready"
	I1213 10:27:25.491304  357320 pod_ready.go:86] duration metric: took 4.775626ms for pod "coredns-66bc5c9577-2h2qj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.493590  357320 pod_ready.go:83] waiting for pod "etcd-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.498029  357320 pod_ready.go:94] pod "etcd-addons-543946" is "Ready"
	I1213 10:27:25.498055  357320 pod_ready.go:86] duration metric: took 4.44187ms for pod "etcd-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.500498  357320 pod_ready.go:83] waiting for pod "kube-apiserver-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.505135  357320 pod_ready.go:94] pod "kube-apiserver-addons-543946" is "Ready"
	I1213 10:27:25.505211  357320 pod_ready.go:86] duration metric: took 4.67855ms for pod "kube-apiserver-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.508057  357320 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.887091  357320 pod_ready.go:94] pod "kube-controller-manager-addons-543946" is "Ready"
	I1213 10:27:25.887178  357320 pod_ready.go:86] duration metric: took 379.092479ms for pod "kube-controller-manager-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:26.087702  357320 pod_ready.go:83] waiting for pod "kube-proxy-cmcs4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:26.486824  357320 pod_ready.go:94] pod "kube-proxy-cmcs4" is "Ready"
	I1213 10:27:26.486850  357320 pod_ready.go:86] duration metric: took 399.118554ms for pod "kube-proxy-cmcs4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:26.687397  357320 pod_ready.go:83] waiting for pod "kube-scheduler-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:27.087449  357320 pod_ready.go:94] pod "kube-scheduler-addons-543946" is "Ready"
	I1213 10:27:27.087480  357320 pod_ready.go:86] duration metric: took 400.054671ms for pod "kube-scheduler-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:27.087494  357320 pod_ready.go:40] duration metric: took 1.604478668s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
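	(The per-label readiness waits above can be reproduced by hand with the same selectors; the namespace and labels below come from the log, but the commands themselves are only an illustrative sketch, not something this test ran:)
	    kubectl get pods -n kube-system -l k8s-app=kube-dns
	    kubectl get pods -n kube-system -l component=kube-apiserver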
	I1213 10:27:27.152045  357320 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 10:27:27.155268  357320 out.go:179] * Done! kubectl is now configured to use "addons-543946" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 10:29:05 addons-543946 crio[830]: time="2025-12-13T10:29:05.557246935Z" level=info msg="Removed pod sandbox: cc90cf5870f1a9780b92b4cb5857d5378ae3325efc79fcbcf4f7bef4e64a2016" id=a0c00aae-4213-48a8-b3b8-a3538863b5c7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.668822766Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-sxxkb/POD" id=920dce42-fe7c-49fb-b76d-e9b952c4a91c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.668897877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.680159522Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-sxxkb Namespace:default ID:cb09d7e009f6c8d13523e611a2a254c683c530995a18e3e33f97b86233e66194 UID:d44d14f0-2cbc-4e09-b5d6-2fad4ea095b4 NetNS:/var/run/netns/f690fadf-90ad-48a6-8b94-161fc994bf0d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001514df8}] Aliases:map[]}"
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.680203174Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-sxxkb to CNI network \"kindnet\" (type=ptp)"
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.693006071Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-sxxkb Namespace:default ID:cb09d7e009f6c8d13523e611a2a254c683c530995a18e3e33f97b86233e66194 UID:d44d14f0-2cbc-4e09-b5d6-2fad4ea095b4 NetNS:/var/run/netns/f690fadf-90ad-48a6-8b94-161fc994bf0d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001514df8}] Aliases:map[]}"
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.693304676Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-sxxkb for CNI network kindnet (type=ptp)"
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.702120567Z" level=info msg="Ran pod sandbox cb09d7e009f6c8d13523e611a2a254c683c530995a18e3e33f97b86233e66194 with infra container: default/hello-world-app-5d498dc89-sxxkb/POD" id=920dce42-fe7c-49fb-b76d-e9b952c4a91c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.704172151Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1188ad05-7c08-4305-864d-9d0db1f6fbe2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.704614404Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=1188ad05-7c08-4305-864d-9d0db1f6fbe2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.70473795Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=1188ad05-7c08-4305-864d-9d0db1f6fbe2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.705759934Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=94fba36f-f699-4284-9cc3-7073af9bbae3 name=/runtime.v1.ImageService/PullImage
	Dec 13 10:30:28 addons-543946 crio[830]: time="2025-12-13T10:30:28.708829621Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.341923978Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=94fba36f-f699-4284-9cc3-7073af9bbae3 name=/runtime.v1.ImageService/PullImage
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.342746223Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=32c48fba-8996-48c4-b3e1-3447489956bc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.346864569Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=53c6603c-aacd-4c70-a3f4-100bbd2308ce name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.356034516Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-sxxkb/hello-world-app" id=34632122-a42a-46a1-ac16-076fd7bd0bbe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.356169123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.365376567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.365702601Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b5ecd63ad95e7882d6f7da2cd740086f13054c8f4ab47ce380602de176e4467d/merged/etc/passwd: no such file or directory"
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.365791587Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b5ecd63ad95e7882d6f7da2cd740086f13054c8f4ab47ce380602de176e4467d/merged/etc/group: no such file or directory"
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.366787438Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.386311517Z" level=info msg="Created container 79c13ecf7bcc9583a64a852809127bccc53580a03f4e9d0b4cf94124e7b63eae: default/hello-world-app-5d498dc89-sxxkb/hello-world-app" id=34632122-a42a-46a1-ac16-076fd7bd0bbe name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.387662744Z" level=info msg="Starting container: 79c13ecf7bcc9583a64a852809127bccc53580a03f4e9d0b4cf94124e7b63eae" id=8ee07451-95a5-4d6e-a05d-7a46b05f623e name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 10:30:29 addons-543946 crio[830]: time="2025-12-13T10:30:29.402610797Z" level=info msg="Started container" PID=7050 containerID=79c13ecf7bcc9583a64a852809127bccc53580a03f4e9d0b4cf94124e7b63eae description=default/hello-world-app-5d498dc89-sxxkb/hello-world-app id=8ee07451-95a5-4d6e-a05d-7a46b05f623e name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb09d7e009f6c8d13523e611a2a254c683c530995a18e3e33f97b86233e66194
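
	Editor's note: the CRI-O entries above show the usual sandbox-then-container flow: RunPodSandbox, an ImageStatus check that misses, PullImage, CreateContainer, StartContainer. Below is a minimal sketch of the image-status/pull half against the CRI API, assuming the CRI-O socket at /var/run/crio/crio.sock; the socket path and image name are taken from this log but otherwise illustrative.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O endpoint; the kubelet talks to the same socket.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		img := runtimeapi.NewImageServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		spec := &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

		// Same check the log shows: is the image already present locally?
		st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		if st.Image == nil {
			// Not found, so pull it, as CRI-O does before CreateContainer.
			if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
				panic(err)
			}
		}
		fmt.Println("image available:", spec.Image)
	}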
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	79c13ecf7bcc9       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   cb09d7e009f6c       hello-world-app-5d498dc89-sxxkb             default
	6dd1a7a1bedba       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago            Running             nginx                                    0                   15b54eaeab6e2       nginx                                       default
	a131b0e592dd6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   93deca5b2751b       busybox                                     default
	9773f1dabc6fd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	ad42e673ec298       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	91959e0b37017       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	ade5c570c4dbe       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	67f7b897bbe36       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    3                   e27910df99afe       ingress-nginx-admission-patch-qvvht         ingress-nginx
	7710a35bda17f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	04dab4e403d9a       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   a88a6a078e4de       ingress-nginx-controller-85d4c799dd-pdrq4   ingress-nginx
	729dc85a7df2d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   dfd0b66e71367       gcp-auth-78565c9fb4-2rxfg                   gcp-auth
	8dc3964679adf       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   592205e009044       gadget-lqcbm                                gadget
	2cc901f4d3fb0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   892674206af28       registry-proxy-rd2tq                        kube-system
	ead471b4c6339       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	6df4509cdc61a       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   3691fbab0a6c9       yakd-dashboard-5ff678cb9-gjksm              yakd-dashboard
	cd17aa42a109a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   b1573cd84e28a       ingress-nginx-admission-create-5dcld        ingress-nginx
	59d844a8a4aed       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   7f20b7157a6b1       nvidia-device-plugin-daemonset-8blxf        kube-system
	0bcd4d507bd4a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   2fbaec37eb8e2       snapshot-controller-7d9fbc56b8-86m6m        kube-system
	9df3579774fb7       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   e650d144045a4       metrics-server-85b7d694d7-h5rdh             kube-system
	7e758bd7d4de4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   69fe6e0cdd6f1       local-path-provisioner-648f6765c9-569sb     local-path-storage
	f8f4b2d0d0ca0       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   54a608455bb53       csi-hostpath-attacher-0                     kube-system
	ea7feaaedcdea       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   b7f5bed676861       cloud-spanner-emulator-5bdddb765-8bzlp      default
	82ec4f0d27393       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   8ee024ae3484e       snapshot-controller-7d9fbc56b8-cf7bx        kube-system
	46a1e5bc68671       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   170bd2c3b5b7b       csi-hostpath-resizer-0                      kube-system
	abd7fd4640572       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   63461495bf3c3       kube-ingress-dns-minikube                   kube-system
	ca06350334c82       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   4126a8b59b290       registry-6b586f9694-w4p9x                   kube-system
	d5c5cc43186b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   f4c514b6a01ed       coredns-66bc5c9577-2h2qj                    kube-system
	cc0f178df84bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   74c060477a7a7       storage-provisioner                         kube-system
	40451bec4cc26       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           4 minutes ago            Running             kindnet-cni                              0                   3d5fbe8de519f       kindnet-rjdb7                               kube-system
	ca309cac66452       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   d9d99089c50e3       kube-proxy-cmcs4                            kube-system
	051e9f414ee6e       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             4 minutes ago            Running             kube-apiserver                           0                   4778db1696aa5       kube-apiserver-addons-543946                kube-system
	cf448155f622a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   dc0a116289aa9       etcd-addons-543946                          kube-system
	76b7938d7fbe3       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             4 minutes ago            Running             kube-controller-manager                  0                   bd558ee45801a       kube-controller-manager-addons-543946       kube-system
	fe380896f1e4d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             4 minutes ago            Running             kube-scheduler                           0                   bfd8a5a5bcb91       kube-scheduler-addons-543946                kube-system
	
	
	==> coredns [d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af] <==
	[INFO] 10.244.0.14:48083 - 7998 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00177451s
	[INFO] 10.244.0.14:48083 - 1424 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102089s
	[INFO] 10.244.0.14:48083 - 23560 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000081674s
	[INFO] 10.244.0.14:45654 - 14408 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159813s
	[INFO] 10.244.0.14:45654 - 14211 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000285517s
	[INFO] 10.244.0.14:41934 - 22925 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137709s
	[INFO] 10.244.0.14:41934 - 22753 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131669s
	[INFO] 10.244.0.14:53966 - 9160 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011068s
	[INFO] 10.244.0.14:53966 - 8706 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099004s
	[INFO] 10.244.0.14:59941 - 37056 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001922811s
	[INFO] 10.244.0.14:59941 - 37245 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002346783s
	[INFO] 10.244.0.14:45730 - 41236 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150418s
	[INFO] 10.244.0.14:45730 - 41023 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192412s
	[INFO] 10.244.0.21:39173 - 41471 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176511s
	[INFO] 10.244.0.21:54349 - 3361 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008933s
	[INFO] 10.244.0.21:52404 - 19122 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000183157s
	[INFO] 10.244.0.21:41724 - 62930 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093728s
	[INFO] 10.244.0.21:46606 - 37887 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013957s
	[INFO] 10.244.0.21:50588 - 25114 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084981s
	[INFO] 10.244.0.21:52007 - 62136 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004125912s
	[INFO] 10.244.0.21:45875 - 16367 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004665151s
	[INFO] 10.244.0.21:58335 - 43326 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001088496s
	[INFO] 10.244.0.21:55746 - 52027 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.008003313s
	[INFO] 10.244.0.23:46043 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000174599s
	[INFO] 10.244.0.23:55450 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130323s
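
	Editor's note: the NXDOMAIN/NOERROR pairs above are ordinary cluster-DNS search-path expansion, not failures: with ndots:5 in the pod's resolv.conf, a name with fewer than five dots is tried against each search suffix (namespace.svc.cluster.local, svc.cluster.local, cluster.local, plus the host's us-east-2.compute.internal) before the bare name, which is why only the final query answers. A minimal sketch of that candidate ordering, assuming those search domains (pure Go, no cluster access needed):

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates reproduces resolver search-list expansion: if the name has fewer
	// dots than ndots, every search suffix is tried before the name itself.
	func candidates(name string, search []string, ndots int) []string {
		if strings.HasSuffix(name, ".") { // already fully qualified
			return []string{name}
		}
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s+".")
			}
			out = append(out, name+".")
		} else {
			out = append(out, name+".")
			for _, s := range search {
				out = append(out, name+"."+s+".")
			}
		}
		return out
	}

	func main() {
		// Search domains as in a typical kube-system pod's resolv.conf on this node.
		search := []string{
			"kube-system.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-east-2.compute.internal",
		}
		for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(q)
		}
	}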
	
	
	==> describe nodes <==
	Name:               addons-543946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-543946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=addons-543946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T10_26_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-543946
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-543946"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 10:26:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-543946
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 10:30:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 10:30:10 +0000   Sat, 13 Dec 2025 10:25:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 10:30:10 +0000   Sat, 13 Dec 2025 10:25:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 10:30:10 +0000   Sat, 13 Dec 2025 10:25:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 10:30:10 +0000   Sat, 13 Dec 2025 10:26:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-543946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                e1eb433e-9ee9-4616-8513-68821455500a
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     cloud-spanner-emulator-5bdddb765-8bzlp       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  default                     hello-world-app-5d498dc89-sxxkb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-lqcbm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  gcp-auth                    gcp-auth-78565c9fb4-2rxfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-pdrq4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m12s
	  kube-system                 coredns-66bc5c9577-2h2qj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m19s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 csi-hostpathplugin-j4gkx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 etcd-addons-543946                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m25s
	  kube-system                 kindnet-rjdb7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m20s
	  kube-system                 kube-apiserver-addons-543946                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-addons-543946        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-cmcs4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-addons-543946                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 metrics-server-85b7d694d7-h5rdh              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m13s
	  kube-system                 nvidia-device-plugin-daemonset-8blxf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 registry-6b586f9694-w4p9x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 registry-creds-764b6fb674-sgjj5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 registry-proxy-rd2tq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-86m6m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-cf7bx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  local-path-storage          local-path-provisioner-648f6765c9-569sb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-gjksm               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m18s                  kube-proxy       
	  Warning  CgroupV1                 4m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node addons-543946 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node addons-543946 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m32s (x8 over 4m32s)  kubelet          Node addons-543946 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m25s                  kubelet          Node addons-543946 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m25s                  kubelet          Node addons-543946 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m25s                  kubelet          Node addons-543946 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m21s                  node-controller  Node addons-543946 event: Registered Node addons-543946 in Controller
	  Normal   NodeReady                4m5s                   kubelet          Node addons-543946 status is now: NodeReady
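
	Editor's note: as a quick sanity check on the Allocated resources table above, the 1050m CPU request total is the sum of the non-zero requests listed: 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, i.e. about 52% of the node's 2-CPU (2000m) capacity. The 638Mi memory request figure is likewise 90 + 70 + 100 + 50 + 200 + 128 Mi.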
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951] <==
	{"level":"warn","ts":"2025-12-13T10:26:01.643376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.666500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.682630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.704910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.717615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.740022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.751569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.797967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.798635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.816005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.870248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.895624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.911762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:02.015596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48816","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T10:26:11.342784Z","caller":"traceutil/trace.go:172","msg":"trace[626553572] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"111.581256ms","start":"2025-12-13T10:26:11.231168Z","end":"2025-12-13T10:26:11.342749Z","steps":["trace[626553572] 'process raft request'  (duration: 99.173332ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T10:26:11.343951Z","caller":"traceutil/trace.go:172","msg":"trace[1998081431] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"112.323033ms","start":"2025-12-13T10:26:11.231610Z","end":"2025-12-13T10:26:11.343933Z","steps":["trace[1998081431] 'process raft request'  (duration: 98.865875ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T10:26:11.414708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.332837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T10:26:11.414780Z","caller":"traceutil/trace.go:172","msg":"trace[1040673509] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:376; }","duration":"109.417105ms","start":"2025-12-13T10:26:11.305350Z","end":"2025-12-13T10:26:11.414767Z","steps":["trace[1040673509] 'agreement among raft nodes before linearized reading'  (duration: 39.833179ms)","trace[1040673509] 'range keys from in-memory index tree'  (duration: 69.481402ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T10:26:11.420323Z","caller":"traceutil/trace.go:172","msg":"trace[648221834] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"114.35942ms","start":"2025-12-13T10:26:11.305939Z","end":"2025-12-13T10:26:11.420298Z","steps":["trace[648221834] 'process raft request'  (duration: 71.31576ms)","trace[648221834] 'compare'  (duration: 37.626656ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T10:26:18.604623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:18.622187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.876463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.886969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.915264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.932393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34364","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [729dc85a7df2d4c932f095d2c39d9d6da7bd7512f04db51ca120e0979dc276c1] <==
	2025/12/13 10:27:11 GCP Auth Webhook started!
	2025/12/13 10:27:27 Ready to marshal response ...
	2025/12/13 10:27:27 Ready to write response ...
	2025/12/13 10:27:27 Ready to marshal response ...
	2025/12/13 10:27:27 Ready to write response ...
	2025/12/13 10:27:27 Ready to marshal response ...
	2025/12/13 10:27:27 Ready to write response ...
	2025/12/13 10:27:49 Ready to marshal response ...
	2025/12/13 10:27:49 Ready to write response ...
	2025/12/13 10:27:53 Ready to marshal response ...
	2025/12/13 10:27:53 Ready to write response ...
	2025/12/13 10:27:53 Ready to marshal response ...
	2025/12/13 10:27:53 Ready to write response ...
	2025/12/13 10:28:01 Ready to marshal response ...
	2025/12/13 10:28:01 Ready to write response ...
	2025/12/13 10:28:07 Ready to marshal response ...
	2025/12/13 10:28:07 Ready to write response ...
	2025/12/13 10:28:12 Ready to marshal response ...
	2025/12/13 10:28:12 Ready to write response ...
	2025/12/13 10:28:33 Ready to marshal response ...
	2025/12/13 10:28:33 Ready to write response ...
	2025/12/13 10:30:28 Ready to marshal response ...
	2025/12/13 10:30:28 Ready to write response ...
	
	
	==> kernel <==
	 10:30:30 up  2:13,  0 user,  load average: 0.70, 1.46, 1.52
	Linux addons-543946 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db] <==
	I1213 10:28:24.929006       1 main.go:301] handling current node
	I1213 10:28:34.929243       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:28:34.929308       1 main.go:301] handling current node
	I1213 10:28:44.929346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:28:44.929430       1 main.go:301] handling current node
	I1213 10:28:54.935614       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:28:54.935725       1 main.go:301] handling current node
	I1213 10:29:04.929634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:29:04.929683       1 main.go:301] handling current node
	I1213 10:29:14.937541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:29:14.937649       1 main.go:301] handling current node
	I1213 10:29:24.936756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:29:24.936791       1 main.go:301] handling current node
	I1213 10:29:34.928908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:29:34.929056       1 main.go:301] handling current node
	I1213 10:29:44.931587       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:29:44.931621       1 main.go:301] handling current node
	I1213 10:29:54.929372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:29:54.929486       1 main.go:301] handling current node
	I1213 10:30:04.929097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:30:04.929211       1 main.go:301] handling current node
	I1213 10:30:14.928771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:30:14.928897       1 main.go:301] handling current node
	I1213 10:30:24.932492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:30:24.932527       1 main.go:301] handling current node
	
	
	==> kube-apiserver [051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624] <==
	W1213 10:26:25.253960       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.83.249:443: connect: connection refused
	E1213 10:26:25.254011       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.83.249:443: connect: connection refused" logger="UnhandledError"
	W1213 10:26:25.255054       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.83.249:443: connect: connection refused
	E1213 10:26:25.255132       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.83.249:443: connect: connection refused" logger="UnhandledError"
	W1213 10:26:25.333220       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.83.249:443: connect: connection refused
	E1213 10:26:25.333353       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.83.249:443: connect: connection refused" logger="UnhandledError"
	W1213 10:26:39.868343       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:26:39.886982       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:26:39.915157       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:26:39.930336       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:27:01.939922       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:27:01.939944       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.61.160:443: connect: connection refused" logger="UnhandledError"
	E1213 10:27:01.940099       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 10:27:01.940588       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.61.160:443: connect: connection refused" logger="UnhandledError"
	I1213 10:27:02.030999       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 10:27:38.108441       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43278: use of closed network connection
	E1213 10:27:38.363358       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43316: use of closed network connection
	I1213 10:28:06.844088       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 10:28:07.150773       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.153.66"}
	I1213 10:28:20.476112       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1213 10:28:40.825313       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1213 10:30:28.566156       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.95.249"}
	
	
	==> kube-controller-manager [76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419] <==
	I1213 10:26:09.862553       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 10:26:09.875575       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 10:26:09.879857       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 10:26:09.881006       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 10:26:09.881074       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 10:26:09.881182       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 10:26:09.881120       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 10:26:09.881110       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 10:26:09.882497       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:26:09.882566       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 10:26:09.882523       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 10:26:09.883899       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 10:26:09.887296       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 10:26:09.887300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 10:26:09.887457       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 10:26:09.888514       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:26:09.888518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 10:26:29.876818       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1213 10:26:39.859346       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 10:26:39.859568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 10:26:39.859629       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 10:26:39.899473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 10:26:39.903245       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 10:26:39.960635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:26:40.004400       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec] <==
	I1213 10:26:11.713305       1 server_linux.go:53] "Using iptables proxy"
	I1213 10:26:11.853348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 10:26:11.959284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:26:11.959361       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 10:26:11.959467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:26:12.028416       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 10:26:12.028473       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:26:12.034030       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:26:12.034321       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:26:12.034336       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:26:12.035921       1 config.go:200] "Starting service config controller"
	I1213 10:26:12.035932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:26:12.035948       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:26:12.035952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:26:12.035962       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:26:12.035966       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:26:12.036682       1 config.go:309] "Starting node config controller"
	I1213 10:26:12.036690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:26:12.036696       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:26:12.138942       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:26:12.139167       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 10:26:12.139462       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d] <==
	E1213 10:26:03.005841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:26:03.005940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:26:03.006038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:26:03.006140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 10:26:03.006525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 10:26:03.006624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 10:26:03.006640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:26:03.006255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 10:26:03.006734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 10:26:03.006788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 10:26:03.006832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 10:26:03.006692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 10:26:03.833182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 10:26:03.856848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 10:26:03.900610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:26:03.948507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 10:26:03.974632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:26:03.976131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 10:26:04.023541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 10:26:04.056648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:26:04.095769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 10:26:04.146600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:26:04.148756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 10:26:04.354827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1213 10:26:06.262601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 10:28:40 addons-543946 kubelet[1284]: E1213 10:28:40.507751    1284 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07f73903b8a2bc6dabf0f691f6ed665ddf04d80b0d701dcb0628e77c89616e7\": container with ID starting with e07f73903b8a2bc6dabf0f691f6ed665ddf04d80b0d701dcb0628e77c89616e7 not found: ID does not exist" containerID="e07f73903b8a2bc6dabf0f691f6ed665ddf04d80b0d701dcb0628e77c89616e7"
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.507793    1284 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07f73903b8a2bc6dabf0f691f6ed665ddf04d80b0d701dcb0628e77c89616e7"} err="failed to get container status \"e07f73903b8a2bc6dabf0f691f6ed665ddf04d80b0d701dcb0628e77c89616e7\": rpc error: code = NotFound desc = could not find container \"e07f73903b8a2bc6dabf0f691f6ed665ddf04d80b0d701dcb0628e77c89616e7\": container with ID starting with e07f73903b8a2bc6dabf0f691f6ed665ddf04d80b0d701dcb0628e77c89616e7 not found: ID does not exist"
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.529990    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^79797776-d80e-11f0-bdca-8aeee6c04dfa\") pod \"8e978bcf-5b33-4db2-b193-87db5ac5268c\" (UID: \"8e978bcf-5b33-4db2-b193-87db5ac5268c\") "
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.530061    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crtb7\" (UniqueName: \"kubernetes.io/projected/8e978bcf-5b33-4db2-b193-87db5ac5268c-kube-api-access-crtb7\") pod \"8e978bcf-5b33-4db2-b193-87db5ac5268c\" (UID: \"8e978bcf-5b33-4db2-b193-87db5ac5268c\") "
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.530093    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8e978bcf-5b33-4db2-b193-87db5ac5268c-gcp-creds\") pod \"8e978bcf-5b33-4db2-b193-87db5ac5268c\" (UID: \"8e978bcf-5b33-4db2-b193-87db5ac5268c\") "
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.530911    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e978bcf-5b33-4db2-b193-87db5ac5268c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "8e978bcf-5b33-4db2-b193-87db5ac5268c" (UID: "8e978bcf-5b33-4db2-b193-87db5ac5268c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.540001    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e978bcf-5b33-4db2-b193-87db5ac5268c-kube-api-access-crtb7" (OuterVolumeSpecName: "kube-api-access-crtb7") pod "8e978bcf-5b33-4db2-b193-87db5ac5268c" (UID: "8e978bcf-5b33-4db2-b193-87db5ac5268c"). InnerVolumeSpecName "kube-api-access-crtb7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.540833    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^79797776-d80e-11f0-bdca-8aeee6c04dfa" (OuterVolumeSpecName: "task-pv-storage") pod "8e978bcf-5b33-4db2-b193-87db5ac5268c" (UID: "8e978bcf-5b33-4db2-b193-87db5ac5268c"). InnerVolumeSpecName "pvc-f8857b4d-bbfd-4adc-b1b4-161938dbada0". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.631633    1284 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f8857b4d-bbfd-4adc-b1b4-161938dbada0\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^79797776-d80e-11f0-bdca-8aeee6c04dfa\") on node \"addons-543946\" "
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.631675    1284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-crtb7\" (UniqueName: \"kubernetes.io/projected/8e978bcf-5b33-4db2-b193-87db5ac5268c-kube-api-access-crtb7\") on node \"addons-543946\" DevicePath \"\""
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.631690    1284 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8e978bcf-5b33-4db2-b193-87db5ac5268c-gcp-creds\") on node \"addons-543946\" DevicePath \"\""
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.638177    1284 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-f8857b4d-bbfd-4adc-b1b4-161938dbada0" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^79797776-d80e-11f0-bdca-8aeee6c04dfa") on node "addons-543946"
	Dec 13 10:28:40 addons-543946 kubelet[1284]: I1213 10:28:40.732962    1284 reconciler_common.go:299] "Volume detached for volume \"pvc-f8857b4d-bbfd-4adc-b1b4-161938dbada0\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^79797776-d80e-11f0-bdca-8aeee6c04dfa\") on node \"addons-543946\" DevicePath \"\""
	Dec 13 10:28:41 addons-543946 kubelet[1284]: I1213 10:28:41.416842    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e978bcf-5b33-4db2-b193-87db5ac5268c" path="/var/lib/kubelet/pods/8e978bcf-5b33-4db2-b193-87db5ac5268c/volumes"
	Dec 13 10:29:05 addons-543946 kubelet[1284]: E1213 10:29:05.554493    1284 manager.go:1116] Failed to create existing container: /crio-cc90cf5870f1a9780b92b4cb5857d5378ae3325efc79fcbcf4f7bef4e64a2016: Error finding container cc90cf5870f1a9780b92b4cb5857d5378ae3325efc79fcbcf4f7bef4e64a2016: Status 404 returned error can't find the container with id cc90cf5870f1a9780b92b4cb5857d5378ae3325efc79fcbcf4f7bef4e64a2016
	Dec 13 10:29:05 addons-543946 kubelet[1284]: E1213 10:29:05.557659    1284 manager.go:1116] Failed to create existing container: /docker/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/crio-7cd7ef496a787f94eb6613b88f029032f68ff650476a78bb7c79dce7751266cf: Error finding container 7cd7ef496a787f94eb6613b88f029032f68ff650476a78bb7c79dce7751266cf: Status 404 returned error can't find the container with id 7cd7ef496a787f94eb6613b88f029032f68ff650476a78bb7c79dce7751266cf
	Dec 13 10:29:05 addons-543946 kubelet[1284]: E1213 10:29:05.558818    1284 manager.go:1116] Failed to create existing container: /crio-7cd7ef496a787f94eb6613b88f029032f68ff650476a78bb7c79dce7751266cf: Error finding container 7cd7ef496a787f94eb6613b88f029032f68ff650476a78bb7c79dce7751266cf: Status 404 returned error can't find the container with id 7cd7ef496a787f94eb6613b88f029032f68ff650476a78bb7c79dce7751266cf
	Dec 13 10:29:12 addons-543946 kubelet[1284]: I1213 10:29:12.413942    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-w4p9x" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:29:30 addons-543946 kubelet[1284]: I1213 10:29:30.413521    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-8blxf" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:29:37 addons-543946 kubelet[1284]: I1213 10:29:37.413985    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rd2tq" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:30:28 addons-543946 kubelet[1284]: I1213 10:30:28.419173    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d44d14f0-2cbc-4e09-b5d6-2fad4ea095b4-gcp-creds\") pod \"hello-world-app-5d498dc89-sxxkb\" (UID: \"d44d14f0-2cbc-4e09-b5d6-2fad4ea095b4\") " pod="default/hello-world-app-5d498dc89-sxxkb"
	Dec 13 10:30:28 addons-543946 kubelet[1284]: I1213 10:30:28.419769    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zzd\" (UniqueName: \"kubernetes.io/projected/d44d14f0-2cbc-4e09-b5d6-2fad4ea095b4-kube-api-access-g7zzd\") pod \"hello-world-app-5d498dc89-sxxkb\" (UID: \"d44d14f0-2cbc-4e09-b5d6-2fad4ea095b4\") " pod="default/hello-world-app-5d498dc89-sxxkb"
	Dec 13 10:30:28 addons-543946 kubelet[1284]: W1213 10:30:28.700172    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/crio-cb09d7e009f6c8d13523e611a2a254c683c530995a18e3e33f97b86233e66194 WatchSource:0}: Error finding container cb09d7e009f6c8d13523e611a2a254c683c530995a18e3e33f97b86233e66194: Status 404 returned error can't find the container with id cb09d7e009f6c8d13523e611a2a254c683c530995a18e3e33f97b86233e66194
	Dec 13 10:30:29 addons-543946 kubelet[1284]: I1213 10:30:29.414679    1284 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-w4p9x" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 10:30:29 addons-543946 kubelet[1284]: I1213 10:30:29.896129    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-sxxkb" podStartSLOduration=1.257350571 podStartE2EDuration="1.896108399s" podCreationTimestamp="2025-12-13 10:30:28 +0000 UTC" firstStartedPulling="2025-12-13 10:30:28.704989218 +0000 UTC m=+263.418467319" lastFinishedPulling="2025-12-13 10:30:29.343747046 +0000 UTC m=+264.057225147" observedRunningTime="2025-12-13 10:30:29.895186116 +0000 UTC m=+264.608664225" watchObservedRunningTime="2025-12-13 10:30:29.896108399 +0000 UTC m=+264.609586508"
	
	
	==> storage-provisioner [cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59] <==
	W1213 10:30:05.445098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:07.448151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:07.452923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:09.456726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:09.464651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:11.468490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:11.473239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:13.476617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:13.481033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:15.484309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:15.488498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:17.492583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:17.497051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:19.500400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:19.506835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:21.509900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:21.514720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:23.517563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:23.522733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:25.526436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:25.534930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:27.539240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:27.546174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:29.549971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:30:29.557936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
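The storage-provisioner log above warns on every sync that v1 Endpoints is deprecated in v1.33+ and that discovery.k8s.io/v1 EndpointSlice should be used instead. Purely for reference, the following is a minimal client-go sketch of reading EndpointSlices rather than Endpoints; the kubeconfig handling and the kube-system namespace are illustrative assumptions, not part of the failing tests.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: use the default kubeconfig location for the current user.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List EndpointSlices (discovery.k8s.io/v1) instead of the deprecated v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}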
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-543946 -n addons-543946
helpers_test.go:270: (dbg) Run:  kubectl --context addons-543946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-543946 describe pod ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-543946 describe pod ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5: exit status 1 (86.577597ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5dcld" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qvvht" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sgjj5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-543946 describe pod ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (312.801845ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:30:31.701002  366738 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:30:31.701908  366738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:30:31.701950  366738 out.go:374] Setting ErrFile to fd 2...
	I1213 10:30:31.701974  366738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:30:31.702277  366738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:30:31.702781  366738 mustload.go:66] Loading cluster: addons-543946
	I1213 10:30:31.703349  366738 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:30:31.703392  366738 addons.go:622] checking whether the cluster is paused
	I1213 10:30:31.703659  366738 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:30:31.703702  366738 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:30:31.704518  366738 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:30:31.732437  366738 ssh_runner.go:195] Run: systemctl --version
	I1213 10:30:31.732495  366738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:30:31.765190  366738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:30:31.880639  366738 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:30:31.880728  366738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:30:31.922669  366738 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:30:31.922690  366738 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:30:31.922695  366738 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:30:31.922699  366738 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:30:31.922702  366738 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:30:31.922706  366738 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:30:31.922709  366738 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:30:31.922712  366738 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:30:31.922714  366738 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:30:31.922720  366738 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:30:31.922723  366738 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:30:31.922726  366738 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:30:31.922729  366738 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:30:31.922732  366738 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:30:31.922735  366738 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:30:31.922739  366738 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:30:31.922742  366738 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:30:31.922746  366738 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:30:31.922749  366738 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:30:31.922752  366738 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:30:31.922757  366738 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:30:31.922760  366738 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:30:31.922763  366738 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:30:31.922774  366738 cri.go:89] found id: ""
	I1213 10:30:31.922826  366738 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:30:31.938602  366738 out.go:203] 
	W1213 10:30:31.941509  366738 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:30:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:30:31.941533  366738 out.go:285] * 
	* 
	W1213 10:30:31.947199  366738 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:30:31.950004  366738 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable ingress --alsologtostderr -v=1: exit status 11 (269.065168ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:30:32.011837  366852 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:30:32.012761  366852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:30:32.012824  366852 out.go:374] Setting ErrFile to fd 2...
	I1213 10:30:32.012846  366852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:30:32.013172  366852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:30:32.013549  366852 mustload.go:66] Loading cluster: addons-543946
	I1213 10:30:32.014044  366852 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:30:32.014093  366852 addons.go:622] checking whether the cluster is paused
	I1213 10:30:32.014241  366852 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:30:32.014281  366852 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:30:32.014843  366852 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:30:32.033911  366852 ssh_runner.go:195] Run: systemctl --version
	I1213 10:30:32.033977  366852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:30:32.053353  366852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:30:32.158110  366852 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:30:32.158223  366852 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:30:32.191644  366852 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:30:32.191669  366852 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:30:32.191674  366852 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:30:32.191679  366852 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:30:32.191682  366852 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:30:32.191687  366852 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:30:32.191690  366852 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:30:32.191694  366852 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:30:32.191697  366852 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:30:32.191703  366852 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:30:32.191707  366852 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:30:32.191710  366852 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:30:32.191714  366852 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:30:32.191718  366852 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:30:32.191722  366852 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:30:32.191726  366852 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:30:32.191730  366852 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:30:32.191735  366852 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:30:32.191738  366852 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:30:32.191741  366852 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:30:32.191747  366852 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:30:32.191758  366852 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:30:32.191761  366852 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:30:32.191765  366852 cri.go:89] found id: ""
	I1213 10:30:32.191818  366852 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:30:32.206577  366852 out.go:203] 
	W1213 10:30:32.209598  366852 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:30:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:30:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:30:32.209622  366852 out.go:285] * 
	* 
	W1213 10:30:32.215357  366852 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:30:32.218241  366852 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.70s)
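Every addons-disable call in this run exits with MK_ADDON_DISABLE_PAUSED because the paused-state check shown in the stderr above shells into the node, lists kube-system containers with crictl, and then runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this CRI-O node. The sketch below is only a way to reproduce those two probes by hand via "minikube ssh"; the profile name and binary path are taken from the log, everything else is an assumption rather than minikube's own code.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command inside the minikube node via `minikube ssh` and
// prints its combined output, mirroring the two probes logged by the disable path.
func run(profile, cmd string) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh", "--", cmd).CombinedOutput()
	fmt.Printf("$ %s\nerr=%v\n%s\n", cmd, err, out)
}

func main() {
	const profile = "addons-543946"
	// The crictl listing succeeds and returns the container IDs seen in the log...
	run(profile, "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system")
	// ...while the runc listing fails because /run/runc does not exist under CRI-O,
	// which is the error surfaced as MK_ADDON_DISABLE_PAUSED above.
	run(profile, "sudo runc list -f json")
}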

                                                
                                    
TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-lqcbm" [add1259b-aca1-4feb-a263-6600a051a55a] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003563887s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (264.401661ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:28:46.801697  365701 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:28:46.802556  365701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:46.802607  365701 out.go:374] Setting ErrFile to fd 2...
	I1213 10:28:46.802629  365701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:46.802953  365701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:28:46.803293  365701 mustload.go:66] Loading cluster: addons-543946
	I1213 10:28:46.803758  365701 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:46.803804  365701 addons.go:622] checking whether the cluster is paused
	I1213 10:28:46.803939  365701 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:46.803974  365701 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:28:46.804527  365701 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:28:46.821748  365701 ssh_runner.go:195] Run: systemctl --version
	I1213 10:28:46.821805  365701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:28:46.840158  365701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:28:46.946052  365701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:28:46.946137  365701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:28:46.981274  365701 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:28:46.981297  365701 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:28:46.981302  365701 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:28:46.981306  365701 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:28:46.981310  365701 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:28:46.981313  365701 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:28:46.981316  365701 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:28:46.981319  365701 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:28:46.981323  365701 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:28:46.981329  365701 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:28:46.981333  365701 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:28:46.981336  365701 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:28:46.981340  365701 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:28:46.981343  365701 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:28:46.981346  365701 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:28:46.981359  365701 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:28:46.981366  365701 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:28:46.981372  365701 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:28:46.981375  365701 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:28:46.981378  365701 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:28:46.981383  365701 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:28:46.981390  365701 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:28:46.981393  365701 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:28:46.981396  365701 cri.go:89] found id: ""
	I1213 10:28:46.981447  365701 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:28:46.996877  365701 out.go:203] 
	W1213 10:28:46.999784  365701 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:28:46.999815  365701 out.go:285] * 
	* 
	W1213 10:28:47.005384  365701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:28:47.008332  365701 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 4.005928ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005065128s
addons_test.go:465: (dbg) Run:  kubectl --context addons-543946 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (260.620635ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:28:06.315652  364739 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:28:06.316447  364739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:06.316488  364739 out.go:374] Setting ErrFile to fd 2...
	I1213 10:28:06.316509  364739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:06.316805  364739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:28:06.317144  364739 mustload.go:66] Loading cluster: addons-543946
	I1213 10:28:06.317602  364739 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:06.317643  364739 addons.go:622] checking whether the cluster is paused
	I1213 10:28:06.317789  364739 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:06.317821  364739 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:28:06.318344  364739 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:28:06.338749  364739 ssh_runner.go:195] Run: systemctl --version
	I1213 10:28:06.338814  364739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:28:06.357521  364739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:28:06.462156  364739 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:28:06.462245  364739 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:28:06.492704  364739 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:28:06.492725  364739 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:28:06.492731  364739 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:28:06.492734  364739 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:28:06.492738  364739 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:28:06.492742  364739 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:28:06.492766  364739 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:28:06.492781  364739 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:28:06.492784  364739 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:28:06.492793  364739 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:28:06.492796  364739 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:28:06.492799  364739 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:28:06.492803  364739 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:28:06.492814  364739 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:28:06.492817  364739 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:28:06.492827  364739 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:28:06.492839  364739 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:28:06.492844  364739 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:28:06.492847  364739 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:28:06.492850  364739 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:28:06.492855  364739 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:28:06.492858  364739 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:28:06.492861  364739 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:28:06.492864  364739 cri.go:89] found id: ""
	I1213 10:28:06.492918  364739 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:28:06.508461  364739 out.go:203] 
	W1213 10:28:06.511610  364739 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:28:06.511643  364739 out.go:285] * 
	* 
	W1213 10:28:06.517194  364739 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:28:06.520221  364739 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)

                                                
                                    
TestAddons/parallel/CSI (39.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 10:28:01.936103  356328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 10:28:01.939324  356328 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 10:28:01.939356  356328 kapi.go:107] duration metric: took 3.264306ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.274309ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-543946 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc -o jsonpath={.status.phase} -n default
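Note: the repeated helpers_test.go:403 invocations above are a poll loop — the helper keeps re-reading the PVC's .status.phase until it reports Bound or the 6m0s timeout expires. A minimal Go sketch of that wait is shown below; the kubectl command, context, PVC name and namespace are copied from the log, while the 2-second interval is an assumption and the wrapper is illustrative rather than the actual test helper.

```go
// Hedged sketch of the PVC wait seen above; the kubectl invocation is taken
// verbatim from the log, the poll interval is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(context, pvc, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pvc", pvc,
			"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; not taken from the test source
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, pvc, timeout)
}

func main() {
	if err := waitForPVCBound("addons-543946", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```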
addons_test.go:564: (dbg) Run:  kubectl --context addons-543946 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [ca733716-4692-4da2-8c4e-7039a528e3e3] Pending
helpers_test.go:353: "task-pv-pod" [ca733716-4692-4da2-8c4e-7039a528e3e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [ca733716-4692-4da2-8c4e-7039a528e3e3] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003143808s
addons_test.go:574: (dbg) Run:  kubectl --context addons-543946 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-543946 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-543946 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-543946 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-543946 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-543946 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-543946 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [8e978bcf-5b33-4db2-b193-87db5ac5268c] Pending
helpers_test.go:353: "task-pv-pod-restore" [8e978bcf-5b33-4db2-b193-87db5ac5268c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [8e978bcf-5b33-4db2-b193-87db5ac5268c] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004133961s
addons_test.go:616: (dbg) Run:  kubectl --context addons-543946 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-543946 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-543946 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (255.291547ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:28:41.278045  365598 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:28:41.278932  365598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:41.278975  365598 out.go:374] Setting ErrFile to fd 2...
	I1213 10:28:41.279015  365598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:41.279392  365598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:28:41.279791  365598 mustload.go:66] Loading cluster: addons-543946
	I1213 10:28:41.280218  365598 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:41.280255  365598 addons.go:622] checking whether the cluster is paused
	I1213 10:28:41.280406  365598 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:41.280437  365598 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:28:41.280996  365598 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:28:41.297939  365598 ssh_runner.go:195] Run: systemctl --version
	I1213 10:28:41.298077  365598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:28:41.316000  365598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:28:41.425910  365598 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:28:41.426023  365598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:28:41.453524  365598 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:28:41.453543  365598 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:28:41.453549  365598 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:28:41.453553  365598 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:28:41.453556  365598 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:28:41.453560  365598 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:28:41.453562  365598 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:28:41.453565  365598 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:28:41.453592  365598 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:28:41.453598  365598 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:28:41.453604  365598 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:28:41.453609  365598 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:28:41.453612  365598 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:28:41.453615  365598 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:28:41.453618  365598 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:28:41.453635  365598 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:28:41.453639  365598 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:28:41.453643  365598 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:28:41.453646  365598 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:28:41.453659  365598 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:28:41.453669  365598 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:28:41.453675  365598 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:28:41.453678  365598 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:28:41.453681  365598 cri.go:89] found id: ""
	I1213 10:28:41.453739  365598 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:28:41.468224  365598 out.go:203] 
	W1213 10:28:41.471229  365598 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:28:41.471254  365598 out.go:285] * 
	* 
	W1213 10:28:41.476892  365598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:28:41.479882  365598 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (256.805566ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:28:41.537998  365640 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:28:41.538916  365640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:41.538955  365640 out.go:374] Setting ErrFile to fd 2...
	I1213 10:28:41.538976  365640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:41.539291  365640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:28:41.539690  365640 mustload.go:66] Loading cluster: addons-543946
	I1213 10:28:41.540093  365640 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:41.540137  365640 addons.go:622] checking whether the cluster is paused
	I1213 10:28:41.540269  365640 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:41.540305  365640 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:28:41.540834  365640 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:28:41.558300  365640 ssh_runner.go:195] Run: systemctl --version
	I1213 10:28:41.558360  365640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:28:41.576767  365640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:28:41.678323  365640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:28:41.678418  365640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:28:41.711410  365640 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:28:41.711445  365640 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:28:41.711451  365640 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:28:41.711455  365640 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:28:41.711458  365640 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:28:41.711462  365640 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:28:41.711467  365640 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:28:41.711470  365640 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:28:41.711474  365640 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:28:41.711480  365640 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:28:41.711484  365640 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:28:41.711488  365640 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:28:41.711496  365640 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:28:41.711500  365640 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:28:41.711503  365640 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:28:41.711546  365640 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:28:41.711555  365640 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:28:41.711560  365640 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:28:41.711563  365640 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:28:41.711566  365640 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:28:41.711571  365640 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:28:41.711575  365640 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:28:41.711578  365640 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:28:41.711581  365640 cri.go:89] found id: ""
	I1213 10:28:41.711632  365640 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:28:41.726431  365640 out.go:203] 
	W1213 10:28:41.729289  365640 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:28:41.729315  365640 out.go:285] * 
	* 
	W1213 10:28:41.734879  365640 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:28:41.737758  365640 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (39.81s)
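Note: every addons enable/disable failure in this run, including the metrics-server one above, follows the pattern visible in the two stderr blocks of this CSI test: before touching the addon, minikube checks whether the cluster is paused (addons.go:622) by listing kube-system containers with crictl and then running `sudo runc list -f json`; the runc step exits 1 with `open /run/runc: no such file or directory` — presumably because the CRI-O runtime on this node does not keep state under runc's default root — so the command aborts with MK_ADDON_DISABLE_PAUSED and the test sees exit status 11. The sketch below reproduces just that two-step check by hand (e.g. run inside `minikube ssh`); both commands are taken verbatim from the log, the Go wrapper itself is not minikube code.

```go
// Standalone sketch (not minikube's implementation) of the paused-state check
// that fails in the stderr above: list kube-system containers via crictl,
// then ask runc for its container list. The runc step is the one that exits 1
// here because /run/runc is absent on this CRI-O node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same crictl invocation that cri.go runs in the log above.
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("kube-system containers: %d\n", len(strings.Fields(string(ids))))

	// This is the step that produces "open /run/runc: no such file or directory".
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed (as in the report): %v\n%s", err, out)
		return
	}
	fmt.Println(strings.TrimSpace(string(out)))
}
```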

                                                
                                    
TestAddons/parallel/Headlamp (3.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-543946 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-543946 --alsologtostderr -v=1: exit status 11 (282.62573ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:27:38.836777  363504 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:38.837737  363504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:38.837758  363504 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:38.837765  363504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:38.838082  363504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:27:38.838404  363504 mustload.go:66] Loading cluster: addons-543946
	I1213 10:27:38.838846  363504 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:38.838867  363504 addons.go:622] checking whether the cluster is paused
	I1213 10:27:38.838991  363504 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:38.839009  363504 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:27:38.839543  363504 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:27:38.861968  363504 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:38.862032  363504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:27:38.879176  363504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:27:38.986319  363504 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:27:38.986454  363504 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:27:39.024499  363504 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:27:39.024522  363504 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:27:39.024527  363504 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:27:39.024531  363504 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:27:39.024535  363504 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:27:39.024539  363504 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:27:39.024543  363504 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:27:39.024546  363504 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:27:39.024559  363504 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:27:39.024570  363504 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:27:39.024574  363504 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:27:39.024583  363504 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:27:39.024600  363504 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:27:39.024604  363504 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:27:39.024607  363504 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:27:39.024612  363504 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:27:39.024616  363504 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:27:39.024621  363504 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:27:39.024624  363504 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:27:39.024627  363504 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:27:39.024636  363504 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:27:39.024642  363504 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:27:39.024649  363504 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:27:39.024652  363504 cri.go:89] found id: ""
	I1213 10:27:39.024749  363504 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:27:39.040068  363504 out.go:203] 
	W1213 10:27:39.043094  363504 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:27:39.043124  363504 out.go:285] * 
	* 
	W1213 10:27:39.049196  363504 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:27:39.052128  363504 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-543946 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-543946
helpers_test.go:244: (dbg) docker inspect addons-543946:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef",
	        "Created": "2025-12-13T10:25:39.465172428Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357716,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:25:39.530052869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/hosts",
	        "LogPath": "/var/lib/docker/containers/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef-json.log",
	        "Name": "/addons-543946",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-543946:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-543946",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef",
	                "LowerDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f2151df7cdf7bf89df314b1fbdcc90c9e3dd13aff68c767d933ee29b7c8ed75/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-543946",
	                "Source": "/var/lib/docker/volumes/addons-543946/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-543946",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-543946",
	                "name.minikube.sigs.k8s.io": "addons-543946",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f5bc84306c158144616c677fe328bbfd36130bbf7da448e6a93d38bc5d815ac",
	            "SandboxKey": "/var/run/docker/netns/3f5bc84306c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-543946": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:66:63:5f:ed:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71fa05f4e527f76206845e71e9db32ded44f1cd6c1b919bffa94bb8f1644d952",
	                    "EndpointID": "324c7b3d8ec751a8f296d564483d83e6b9a4d29c6770af9def0a220bd30cdd5e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-543946",
	                        "771f4b2573d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
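Note: the docker inspect output above is where the SSH details earlier in the run come from — the container publishes 22/tcp on 127.0.0.1:33143, the same port the `new ssh client` lines report. A small sketch of extracting that port with the same Go template string that appears in the cli_runner lines of the log follows; the wrapper function is illustrative only, not minikube's code.

```go
// Sketch: read the host port mapped to the container's 22/tcp, using the
// docker inspect format string that appears in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-543946")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port) // 33143 in this report
}
```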
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-543946 -n addons-543946
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-543946 logs -n 25: (1.434221922s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-228427 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-228427   │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-228427                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-228427   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-130157 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-130157   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-130157                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130157   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-963520 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-963520   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-963520                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-963520   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-228427                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-228427   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-130157                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-130157   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-963520                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-963520   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ --download-only -p download-docker-135245 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-135245 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ delete  │ -p download-docker-135245                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-135245 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ --download-only -p binary-mirror-392613 --alsologtostderr --binary-mirror http://127.0.0.1:41447 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-392613   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ delete  │ -p binary-mirror-392613                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-392613   │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ addons  │ disable dashboard -p addons-543946                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ addons  │ enable dashboard -p addons-543946                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ start   │ -p addons-543946 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:27 UTC │
	│ addons  │ addons-543946 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ addons-543946 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	│ addons  │ enable headlamp -p addons-543946 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-543946          │ jenkins │ v1.37.0 │ 13 Dec 25 10:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:25:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:25:14.468964  357320 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:25:14.469113  357320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:14.469147  357320 out.go:374] Setting ErrFile to fd 2...
	I1213 10:25:14.469158  357320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:14.469429  357320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:25:14.469923  357320 out.go:368] Setting JSON to false
	I1213 10:25:14.470739  357320 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7667,"bootTime":1765613848,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:25:14.470809  357320 start.go:143] virtualization:  
	I1213 10:25:14.474161  357320 out.go:179] * [addons-543946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:25:14.478016  357320 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:25:14.478120  357320 notify.go:221] Checking for updates...
	I1213 10:25:14.483785  357320 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:25:14.486581  357320 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:25:14.489483  357320 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:25:14.492244  357320 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:25:14.495072  357320 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:25:14.498158  357320 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:25:14.524603  357320 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:25:14.524730  357320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:14.599845  357320 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:25:14.590760037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:14.599951  357320 docker.go:319] overlay module found
	I1213 10:25:14.603104  357320 out.go:179] * Using the docker driver based on user configuration
	I1213 10:25:14.606007  357320 start.go:309] selected driver: docker
	I1213 10:25:14.606026  357320 start.go:927] validating driver "docker" against <nil>
	I1213 10:25:14.606040  357320 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:25:14.606768  357320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:14.663116  357320 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:25:14.654057643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:14.663281  357320 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:25:14.663559  357320 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:25:14.666501  357320 out.go:179] * Using Docker driver with root privileges
	I1213 10:25:14.669244  357320 cni.go:84] Creating CNI manager for ""
	I1213 10:25:14.669313  357320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:25:14.669326  357320 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:25:14.669412  357320 start.go:353] cluster config:
	{Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1213 10:25:14.674299  357320 out.go:179] * Starting "addons-543946" primary control-plane node in "addons-543946" cluster
	I1213 10:25:14.677120  357320 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:25:14.679984  357320 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:25:14.682759  357320 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:25:14.682811  357320 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 10:25:14.682826  357320 cache.go:65] Caching tarball of preloaded images
	I1213 10:25:14.682854  357320 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:25:14.682920  357320 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:25:14.682931  357320 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 10:25:14.683294  357320 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/config.json ...
	I1213 10:25:14.683328  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/config.json: {Name:mk5b74fbe0050f60fa211ab2c491db2cebc68da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:14.698697  357320 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:25:14.698837  357320 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 10:25:14.698856  357320 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 10:25:14.698861  357320 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 10:25:14.698868  357320 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 10:25:14.698873  357320 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 10:25:32.594892  357320 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 10:25:32.594950  357320 cache.go:243] Successfully downloaded all kic artifacts
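	Note: at this point the kicbase image has been loaded from the cached tarball into the local Docker daemon. A quick manual check that the image is actually present might look like the following (repository name taken from the log above):
	
	  docker images --digests gcr.io/k8s-minikube/kicbase-builds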
	I1213 10:25:32.594992  357320 start.go:360] acquireMachinesLock for addons-543946: {Name:mk28b673a92918c927bb67ea3cd59db53631e327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:25:32.595112  357320 start.go:364] duration metric: took 94.861µs to acquireMachinesLock for "addons-543946"
	I1213 10:25:32.595143  357320 start.go:93] Provisioning new machine with config: &{Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:25:32.595215  357320 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:25:32.598702  357320 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 10:25:32.598957  357320 start.go:159] libmachine.API.Create for "addons-543946" (driver="docker")
	I1213 10:25:32.599004  357320 client.go:173] LocalClient.Create starting
	I1213 10:25:32.599132  357320 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 10:25:32.656508  357320 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 10:25:33.005921  357320 cli_runner.go:164] Run: docker network inspect addons-543946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:25:33.026544  357320 cli_runner.go:211] docker network inspect addons-543946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:25:33.026645  357320 network_create.go:284] running [docker network inspect addons-543946] to gather additional debugging logs...
	I1213 10:25:33.026668  357320 cli_runner.go:164] Run: docker network inspect addons-543946
	W1213 10:25:33.043121  357320 cli_runner.go:211] docker network inspect addons-543946 returned with exit code 1
	I1213 10:25:33.043148  357320 network_create.go:287] error running [docker network inspect addons-543946]: docker network inspect addons-543946: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-543946 not found
	I1213 10:25:33.043170  357320 network_create.go:289] output of [docker network inspect addons-543946]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-543946 not found
	
	** /stderr **
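	Note: the non-zero exit above is expected; minikube probes for an existing network before creating one, and "docker network inspect" fails while the network does not exist yet. The probe can be reproduced by hand roughly as follows (network name taken from this run):
	
	  # exits 1 and prints the same "not found" error while the network is absent
	  docker network inspect addons-543946 >/dev/null 2>&1 || echo "network not found yet"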
	I1213 10:25:33.043270  357320 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:25:33.058823  357320 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a51390}
	I1213 10:25:33.058863  357320 network_create.go:124] attempt to create docker network addons-543946 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 10:25:33.058920  357320 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-543946 addons-543946
	I1213 10:25:33.118806  357320 network_create.go:108] docker network addons-543946 192.168.49.0/24 created
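	Note: the subnet and gateway chosen above (192.168.49.0/24, gateway 192.168.49.1) can be read back from the created network; a minimal check, using the same label minikube applies, might be:
	
	  docker network inspect addons-543946 --format '{{json .IPAM.Config}}'
	  docker network ls --filter label=created_by.minikube.sigs.k8s.io=true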
	I1213 10:25:33.118838  357320 kic.go:121] calculated static IP "192.168.49.2" for the "addons-543946" container
	I1213 10:25:33.118927  357320 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:25:33.135606  357320 cli_runner.go:164] Run: docker volume create addons-543946 --label name.minikube.sigs.k8s.io=addons-543946 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:25:33.152664  357320 oci.go:103] Successfully created a docker volume addons-543946
	I1213 10:25:33.152768  357320 cli_runner.go:164] Run: docker run --rm --name addons-543946-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-543946 --entrypoint /usr/bin/test -v addons-543946:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:25:35.445039  357320 cli_runner.go:217] Completed: docker run --rm --name addons-543946-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-543946 --entrypoint /usr/bin/test -v addons-543946:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (2.292225539s)
	I1213 10:25:35.445070  357320 oci.go:107] Successfully prepared a docker volume addons-543946
	I1213 10:25:35.445113  357320 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:25:35.445130  357320 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:25:35.445206  357320 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-543946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:25:39.385245  357320 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-543946:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.939998045s)
	I1213 10:25:39.385280  357320 kic.go:203] duration metric: took 3.940147305s to extract preloaded images to volume ...
	W1213 10:25:39.385435  357320 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:25:39.385548  357320 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:25:39.450481  357320 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-543946 --name addons-543946 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-543946 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-543946 --network addons-543946 --ip 192.168.49.2 --volume addons-543946:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:25:39.749172  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Running}}
	I1213 10:25:39.770800  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
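	Note: the "docker run" above publishes ports 22, 2376, 5000, 8443 and 32443 to ephemeral host ports bound to 127.0.0.1; the inspect template used further down resolves the SSH mapping (33143 in this run). The same mappings can be listed directly with "docker port" (container name taken from this run):
	
	  docker port addons-543946
	  docker port addons-543946 22/tcp   # just the SSH mapping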
	I1213 10:25:39.793041  357320 cli_runner.go:164] Run: docker exec addons-543946 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:25:39.840908  357320 oci.go:144] the created container "addons-543946" has a running status.
	I1213 10:25:39.840935  357320 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa...
	I1213 10:25:40.027022  357320 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:25:40.053560  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:25:40.074585  357320 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:25:40.074608  357320 kic_runner.go:114] Args: [docker exec --privileged addons-543946 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:25:40.149438  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:25:40.169585  357320 machine.go:94] provisionDockerMachine start ...
	I1213 10:25:40.169673  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:40.194565  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:40.194885  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:40.194900  357320 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:25:40.195481  357320 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 10:25:43.346827  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-543946
	
	I1213 10:25:43.346848  357320 ubuntu.go:182] provisioning hostname "addons-543946"
	I1213 10:25:43.346925  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:43.362973  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:43.363401  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:43.363416  357320 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-543946 && echo "addons-543946" | sudo tee /etc/hostname
	I1213 10:25:43.525151  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-543946
	
	I1213 10:25:43.525230  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:43.542888  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:43.543200  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:43.543221  357320 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-543946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-543946/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-543946' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:25:43.691780  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:25:43.691812  357320 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:25:43.691844  357320 ubuntu.go:190] setting up certificates
	I1213 10:25:43.691868  357320 provision.go:84] configureAuth start
	I1213 10:25:43.691938  357320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-543946
	I1213 10:25:43.709725  357320 provision.go:143] copyHostCerts
	I1213 10:25:43.709807  357320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:25:43.709941  357320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:25:43.710017  357320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:25:43.710073  357320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.addons-543946 san=[127.0.0.1 192.168.49.2 addons-543946 localhost minikube]
	I1213 10:25:44.035865  357320 provision.go:177] copyRemoteCerts
	I1213 10:25:44.035938  357320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:25:44.035979  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.052955  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.155120  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:25:44.173298  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 10:25:44.190575  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1213 10:25:44.208236  357320 provision.go:87] duration metric: took 516.350062ms to configureAuth
	I1213 10:25:44.208308  357320 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:25:44.208530  357320 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:25:44.208647  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.225447  357320 main.go:143] libmachine: Using SSH client type: native
	I1213 10:25:44.225772  357320 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1213 10:25:44.225793  357320 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:25:44.526873  357320 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:25:44.526949  357320 machine.go:97] duration metric: took 4.357340475s to provisionDockerMachine
	I1213 10:25:44.526975  357320 client.go:176] duration metric: took 11.927964579s to LocalClient.Create
	I1213 10:25:44.527006  357320 start.go:167] duration metric: took 11.928050964s to libmachine.API.Create "addons-543946"
	I1213 10:25:44.527026  357320 start.go:293] postStartSetup for "addons-543946" (driver="docker")
	I1213 10:25:44.527060  357320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:25:44.527146  357320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:25:44.527255  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.545053  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.647464  357320 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:25:44.650899  357320 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:25:44.650930  357320 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:25:44.650946  357320 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:25:44.651015  357320 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:25:44.651046  357320 start.go:296] duration metric: took 123.99285ms for postStartSetup
	I1213 10:25:44.651357  357320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-543946
	I1213 10:25:44.668786  357320 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/config.json ...
	I1213 10:25:44.669064  357320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:25:44.669112  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.685271  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.788779  357320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:25:44.793756  357320 start.go:128] duration metric: took 12.198525799s to createHost
	I1213 10:25:44.793788  357320 start.go:83] releasing machines lock for "addons-543946", held for 12.198660726s
	I1213 10:25:44.793867  357320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-543946
	I1213 10:25:44.810450  357320 ssh_runner.go:195] Run: cat /version.json
	I1213 10:25:44.810512  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.810619  357320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:25:44.810676  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:25:44.837823  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.845128  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:25:44.943087  357320 ssh_runner.go:195] Run: systemctl --version
	I1213 10:25:45.039338  357320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:25:45.100786  357320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:25:45.107357  357320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:25:45.107487  357320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:25:45.145608  357320 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 10:25:45.145674  357320 start.go:496] detecting cgroup driver to use...
	I1213 10:25:45.145715  357320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:25:45.145835  357320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:25:45.168659  357320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:25:45.186885  357320 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:25:45.187143  357320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:25:45.215981  357320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:25:45.243820  357320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:25:45.388850  357320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:25:45.514000  357320 docker.go:234] disabling docker service ...
	I1213 10:25:45.514069  357320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:25:45.535779  357320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:25:45.548933  357320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:25:45.668606  357320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:25:45.786652  357320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
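	Note: the sequence above stops and masks cri-docker and docker (and stops containerd) so that CRI-O is left as the only active runtime in the node container. A rough way to confirm this by hand from the host would be (unit names taken from the log; a sketch, not part of this run):
	
	  docker exec addons-543946 systemctl is-active docker containerd crio
	  # expect docker and containerd to report "inactive" once crio has been restarted below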
	I1213 10:25:45.799607  357320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:25:45.813905  357320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:25:45.814004  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.822663  357320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:25:45.822765  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.831646  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.840368  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.849462  357320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:25:45.857790  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.866853  357320 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:25:45.880512  357320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
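	Note: taken together, the sed edits above leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a reconstruction from the commands in the log, with section headers added for orientation; not a dump of the actual file):
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]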
	I1213 10:25:45.889274  357320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:25:45.896691  357320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:25:45.904448  357320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:46.011550  357320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:25:46.187122  357320 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:25:46.187208  357320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:25:46.190936  357320 start.go:564] Will wait 60s for crictl version
	I1213 10:25:46.191003  357320 ssh_runner.go:195] Run: which crictl
	I1213 10:25:46.194180  357320 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:25:46.221026  357320 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:25:46.221124  357320 ssh_runner.go:195] Run: crio --version
	I1213 10:25:46.250390  357320 ssh_runner.go:195] Run: crio --version
	I1213 10:25:46.280620  357320 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 10:25:46.283403  357320 cli_runner.go:164] Run: docker network inspect addons-543946 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:25:46.299585  357320 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:25:46.303238  357320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:25:46.312849  357320 kubeadm.go:884] updating cluster {Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:25:46.312979  357320 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 10:25:46.313038  357320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:25:46.345145  357320 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:25:46.345167  357320 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:25:46.345221  357320 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:25:46.382599  357320 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:25:46.382624  357320 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:25:46.382632  357320 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 10:25:46.382719  357320 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-543946 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:25:46.382804  357320 ssh_runner.go:195] Run: crio config
	I1213 10:25:46.437296  357320 cni.go:84] Creating CNI manager for ""
	I1213 10:25:46.437321  357320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:25:46.437365  357320 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:25:46.437394  357320 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-543946 NodeName:addons-543946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:25:46.437524  357320 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-543946"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:25:46.437600  357320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 10:25:46.445581  357320 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:25:46.445654  357320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:25:46.454626  357320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 10:25:46.467498  357320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 10:25:46.480238  357320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
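	Note: the kubeadm configuration shown above has just been written to /var/tmp/minikube/kubeadm.yaml.new; it is consumed by kubeadm during cluster bootstrap later in the start sequence (outside this excerpt). With a recent kubeadm release, a file of this shape can be sanity-checked offline with something along the lines of (illustrative only, not part of this run):
	
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new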
	I1213 10:25:46.492792  357320 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:25:46.496499  357320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:25:46.506054  357320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:25:46.611624  357320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:25:46.626683  357320 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946 for IP: 192.168.49.2
	I1213 10:25:46.626706  357320 certs.go:195] generating shared ca certs ...
	I1213 10:25:46.626722  357320 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:46.626913  357320 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:25:47.204768  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt ...
	I1213 10:25:47.204805  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt: {Name:mk40527cd6a78d6865530eda3515d7d66bc3735f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.205005  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key ...
	I1213 10:25:47.205018  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key: {Name:mkedfc6b0347ec89e97cad1eedd0013496b4a5aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.205107  357320 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:25:47.439029  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt ...
	I1213 10:25:47.439057  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt: {Name:mk9420c8b224fa9f09e2c198603b8e1c2c54b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.439237  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key ...
	I1213 10:25:47.439250  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key: {Name:mk736a395033f19b2378469d93d84caf4d9f9094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.439331  357320 certs.go:257] generating profile certs ...
	I1213 10:25:47.439393  357320 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.key
	I1213 10:25:47.439410  357320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt with IP's: []
	I1213 10:25:47.565758  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt ...
	I1213 10:25:47.565792  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: {Name:mkc757bbee111d3d94e08f102e6b9051de83f356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.565986  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.key ...
	I1213 10:25:47.566000  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.key: {Name:mk1b27f6c7da454226a68ac3488e27ecfef1f4a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.566088  357320 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae
	I1213 10:25:47.566113  357320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 10:25:47.757570  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae ...
	I1213 10:25:47.757603  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae: {Name:mk9cb03e9bf28afc834243a7959df21e4d0904d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.757782  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae ...
	I1213 10:25:47.757796  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae: {Name:mkaf5350f7e2fa2bca7302c044ea91647c8e6a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.757882  357320 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt.736b28ae -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt
	I1213 10:25:47.757961  357320 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key.736b28ae -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key
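	Note: the apiserver certificate generated above embeds the SANs listed at 10:25:47.566113 (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2, plus whatever DNS names minikube adds). They can be read back from the written file with openssl (path taken from this run):
	
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'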
	I1213 10:25:47.758016  357320 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key
	I1213 10:25:47.758036  357320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt with IP's: []
	I1213 10:25:47.952145  357320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt ...
	I1213 10:25:47.952173  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt: {Name:mk378eb64df056a2196f869ba6c51c0c990ec56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.952379  357320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key ...
	I1213 10:25:47.952394  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key: {Name:mke175128fe2051803c7e5af81e699c14acdccba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:47.952590  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:25:47.952634  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:25:47.952665  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:25:47.952699  357320 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:25:47.953323  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:25:47.972174  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:25:47.989840  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:25:48.008028  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:25:48.029208  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 10:25:48.048588  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:25:48.067664  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:25:48.086397  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:25:48.104915  357320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:25:48.124398  357320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:25:48.138229  357320 ssh_runner.go:195] Run: openssl version
	I1213 10:25:48.144725  357320 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.152484  357320 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:25:48.160202  357320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.164143  357320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.164210  357320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:25:48.205363  357320 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:25:48.212881  357320 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:25:48.220348  357320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:25:48.224037  357320 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:25:48.224089  357320 kubeadm.go:401] StartCluster: {Name:addons-543946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-543946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:25:48.224176  357320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:25:48.224247  357320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:25:48.251200  357320 cri.go:89] found id: ""
	I1213 10:25:48.251342  357320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:25:48.259403  357320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:25:48.267355  357320 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:25:48.267424  357320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:25:48.279121  357320 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:25:48.279141  357320 kubeadm.go:158] found existing configuration files:
	
	I1213 10:25:48.279195  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 10:25:48.288126  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:25:48.288190  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:25:48.296205  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 10:25:48.304944  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:25:48.305008  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:25:48.312915  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 10:25:48.321585  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:25:48.321648  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:25:48.330472  357320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 10:25:48.338316  357320 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:25:48.338418  357320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:25:48.346331  357320 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:25:48.386720  357320 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 10:25:48.386795  357320 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:25:48.411758  357320 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:25:48.411860  357320 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:25:48.411947  357320 kubeadm.go:319] OS: Linux
	I1213 10:25:48.412031  357320 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:25:48.412123  357320 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:25:48.412183  357320 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:25:48.412239  357320 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:25:48.412293  357320 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:25:48.412364  357320 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:25:48.412459  357320 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:25:48.412551  357320 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:25:48.412643  357320 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:25:48.479655  357320 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:25:48.479859  357320 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:25:48.479992  357320 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:25:48.489680  357320 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:25:48.496275  357320 out.go:252]   - Generating certificates and keys ...
	I1213 10:25:48.496389  357320 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:25:48.496473  357320 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:25:49.351627  357320 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:25:49.740615  357320 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:25:50.062262  357320 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:25:50.332710  357320 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:25:50.876472  357320 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:25:50.876812  357320 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-543946 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:25:51.864564  357320 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:25:51.864940  357320 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-543946 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:25:53.530037  357320 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:25:53.764404  357320 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:25:54.122075  357320 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:25:54.122354  357320 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:25:54.715300  357320 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:25:55.724056  357320 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:25:56.134010  357320 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:25:56.555160  357320 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:25:56.972558  357320 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:25:56.973516  357320 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:25:56.976881  357320 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:25:56.980285  357320 out.go:252]   - Booting up control plane ...
	I1213 10:25:56.980389  357320 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:25:56.980465  357320 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:25:56.981161  357320 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:25:56.996591  357320 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:25:56.996965  357320 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:25:57.005948  357320 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:25:57.006050  357320 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:25:57.006088  357320 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:25:57.140027  357320 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:25:57.140151  357320 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:25:58.638858  357320 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501736903s
	I1213 10:25:58.645601  357320 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 10:25:58.645702  357320 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 10:25:58.645792  357320 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 10:25:58.645871  357320 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 10:26:01.360702  357320 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.714763995s
	I1213 10:26:02.990623  357320 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.345025068s
	I1213 10:26:04.647832  357320 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002027628s
	I1213 10:26:04.682682  357320 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 10:26:04.697532  357320 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 10:26:04.712309  357320 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 10:26:04.712534  357320 kubeadm.go:319] [mark-control-plane] Marking the node addons-543946 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 10:26:04.724663  357320 kubeadm.go:319] [bootstrap-token] Using token: gouzdj.5i63bgisvk0e7a0d
	I1213 10:26:04.727724  357320 out.go:252]   - Configuring RBAC rules ...
	I1213 10:26:04.727853  357320 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 10:26:04.732435  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 10:26:04.742183  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 10:26:04.746372  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 10:26:04.750568  357320 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 10:26:04.754837  357320 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 10:26:05.054398  357320 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 10:26:05.487328  357320 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 10:26:06.054936  357320 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 10:26:06.056340  357320 kubeadm.go:319] 
	I1213 10:26:06.056416  357320 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 10:26:06.056426  357320 kubeadm.go:319] 
	I1213 10:26:06.056500  357320 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 10:26:06.056508  357320 kubeadm.go:319] 
	I1213 10:26:06.056532  357320 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 10:26:06.056591  357320 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 10:26:06.056643  357320 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 10:26:06.056651  357320 kubeadm.go:319] 
	I1213 10:26:06.056702  357320 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 10:26:06.056711  357320 kubeadm.go:319] 
	I1213 10:26:06.056756  357320 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 10:26:06.056764  357320 kubeadm.go:319] 
	I1213 10:26:06.056814  357320 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 10:26:06.056889  357320 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 10:26:06.056957  357320 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 10:26:06.056984  357320 kubeadm.go:319] 
	I1213 10:26:06.057069  357320 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 10:26:06.057146  357320 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 10:26:06.057154  357320 kubeadm.go:319] 
	I1213 10:26:06.057234  357320 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gouzdj.5i63bgisvk0e7a0d \
	I1213 10:26:06.057339  357320 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 \
	I1213 10:26:06.057364  357320 kubeadm.go:319] 	--control-plane 
	I1213 10:26:06.057370  357320 kubeadm.go:319] 
	I1213 10:26:06.057451  357320 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 10:26:06.057457  357320 kubeadm.go:319] 
	I1213 10:26:06.057535  357320 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gouzdj.5i63bgisvk0e7a0d \
	I1213 10:26:06.057636  357320 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 
	I1213 10:26:06.061027  357320 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 10:26:06.061297  357320 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:26:06.061421  357320 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:26:06.061436  357320 cni.go:84] Creating CNI manager for ""
	I1213 10:26:06.061444  357320 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:26:06.064800  357320 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 10:26:06.067883  357320 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 10:26:06.071940  357320 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 10:26:06.071965  357320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1213 10:26:06.087252  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 10:26:06.381023  357320 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 10:26:06.381145  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:06.381211  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-543946 minikube.k8s.io/updated_at=2025_12_13T10_26_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=addons-543946 minikube.k8s.io/primary=true
	I1213 10:26:06.513048  357320 ops.go:34] apiserver oom_adj: -16
	I1213 10:26:06.524875  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:07.025575  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:07.525003  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:08.024981  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:08.524942  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:09.025082  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:09.525042  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:10.025671  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:10.525267  357320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 10:26:10.681837  357320 kubeadm.go:1114] duration metric: took 4.300736591s to wait for elevateKubeSystemPrivileges
	I1213 10:26:10.681865  357320 kubeadm.go:403] duration metric: took 22.457782572s to StartCluster
	I1213 10:26:10.681882  357320 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:10.681996  357320 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:26:10.682356  357320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:26:10.682532  357320 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:26:10.682724  357320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 10:26:10.682975  357320 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:26:10.683008  357320 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 10:26:10.683089  357320 addons.go:70] Setting yakd=true in profile "addons-543946"
	I1213 10:26:10.683102  357320 addons.go:239] Setting addon yakd=true in "addons-543946"
	I1213 10:26:10.683122  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.683606  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.684095  357320 addons.go:70] Setting metrics-server=true in profile "addons-543946"
	I1213 10:26:10.684112  357320 addons.go:239] Setting addon metrics-server=true in "addons-543946"
	I1213 10:26:10.684134  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.684557  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.684704  357320 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-543946"
	I1213 10:26:10.684732  357320 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-543946"
	I1213 10:26:10.684757  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.685206  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.688188  357320 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-543946"
	I1213 10:26:10.691278  357320 addons.go:70] Setting cloud-spanner=true in profile "addons-543946"
	I1213 10:26:10.692684  357320 addons.go:239] Setting addon cloud-spanner=true in "addons-543946"
	I1213 10:26:10.692717  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.693185  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.693369  357320 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-543946"
	I1213 10:26:10.693460  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.691303  357320 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-543946"
	I1213 10:26:10.694677  357320 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-543946"
	I1213 10:26:10.694702  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.695096  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.695938  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.691313  357320 out.go:179] * Verifying Kubernetes components...
	I1213 10:26:10.691341  357320 addons.go:70] Setting default-storageclass=true in profile "addons-543946"
	I1213 10:26:10.720469  357320 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-543946"
	I1213 10:26:10.720855  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.726394  357320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:26:10.691355  357320 addons.go:70] Setting gcp-auth=true in profile "addons-543946"
	I1213 10:26:10.726808  357320 mustload.go:66] Loading cluster: addons-543946
	I1213 10:26:10.727012  357320 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:26:10.727272  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.691364  357320 addons.go:70] Setting ingress=true in profile "addons-543946"
	I1213 10:26:10.740157  357320 addons.go:239] Setting addon ingress=true in "addons-543946"
	I1213 10:26:10.740205  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.740689  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.691370  357320 addons.go:70] Setting ingress-dns=true in profile "addons-543946"
	I1213 10:26:10.764840  357320 addons.go:239] Setting addon ingress-dns=true in "addons-543946"
	I1213 10:26:10.764890  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.765377  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.791284  357320 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 10:26:10.691376  357320 addons.go:70] Setting inspektor-gadget=true in profile "addons-543946"
	I1213 10:26:10.691613  357320 addons.go:70] Setting registry=true in profile "addons-543946"
	I1213 10:26:10.691621  357320 addons.go:70] Setting registry-creds=true in profile "addons-543946"
	I1213 10:26:10.691632  357320 addons.go:70] Setting storage-provisioner=true in profile "addons-543946"
	I1213 10:26:10.691638  357320 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-543946"
	I1213 10:26:10.691644  357320 addons.go:70] Setting volcano=true in profile "addons-543946"
	I1213 10:26:10.691649  357320 addons.go:70] Setting volumesnapshots=true in profile "addons-543946"
	I1213 10:26:10.797984  357320 addons.go:239] Setting addon volumesnapshots=true in "addons-543946"
	I1213 10:26:10.798060  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.806530  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.814383  357320 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 10:26:10.814435  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 10:26:10.814446  357320 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 10:26:10.814508  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:10.843915  357320 addons.go:239] Setting addon inspektor-gadget=true in "addons-543946"
	I1213 10:26:10.844012  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.844522  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.844844  357320 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:26:10.844858  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 10:26:10.844901  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:10.862644  357320 addons.go:239] Setting addon registry=true in "addons-543946"
	I1213 10:26:10.862744  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.863262  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.879356  357320 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 10:26:10.882491  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 10:26:10.882520  357320 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 10:26:10.882597  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:10.895612  357320 addons.go:239] Setting addon registry-creds=true in "addons-543946"
	I1213 10:26:10.895676  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.896227  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.904490  357320 addons.go:239] Setting addon storage-provisioner=true in "addons-543946"
	I1213 10:26:10.904537  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.905059  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.927812  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.928900  357320 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-543946"
	I1213 10:26:10.929213  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.930113  357320 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 10:26:10.930269  357320 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 10:26:10.953194  357320 addons.go:239] Setting addon volcano=true in "addons-543946"
	I1213 10:26:10.953249  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.953739  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.960169  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 10:26:10.966192  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 10:26:10.974908  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 10:26:10.980089  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 10:26:10.981701  357320 addons.go:239] Setting addon default-storageclass=true in "addons-543946"
	I1213 10:26:10.981738  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:10.982138  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:10.987157  357320 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 10:26:10.987179  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 10:26:10.987243  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.023794  357320 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:26:11.023818  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 10:26:11.023884  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.070613  357320 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 10:26:11.091585  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 10:26:11.098421  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 10:26:11.106889  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 10:26:11.107069  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 10:26:11.107971  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 10:26:11.110121  357320 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:26:11.110145  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 10:26:11.110204  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.131979  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.132833  357320 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 10:26:11.139763  357320 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 10:26:11.139895  357320 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 10:26:11.142739  357320 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 10:26:11.143912  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 10:26:11.143963  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 10:26:11.144781  357320 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 10:26:11.144793  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 10:26:11.144851  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.145027  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.154286  357320 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:26:11.155305  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 10:26:11.155322  357320 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 10:26:11.155388  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.155654  357320 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:26:11.155668  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 10:26:11.155707  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.172967  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.173726  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.174130  357320 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:26:11.174141  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:26:11.174195  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.175747  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:26:11.180973  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:26:11.187668  357320 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:26:11.187701  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 10:26:11.187766  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.211055  357320 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 10:26:11.219969  357320 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-543946"
	I1213 10:26:11.220015  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:11.220425  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:11.222791  357320 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:26:11.222819  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 10:26:11.222880  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.266832  357320 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:26:11.266853  357320 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:26:11.266911  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	W1213 10:26:11.273062  357320 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 10:26:11.288343  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.302632  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.336848  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.343810  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.363675  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.398834  357320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 10:26:11.399215  357320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:26:11.407761  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.418478  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.427003  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.435686  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.436863  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.442711  357320 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 10:26:11.445695  357320 out.go:179]   - Using image docker.io/busybox:stable
	I1213 10:26:11.448047  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	W1213 10:26:11.448592  357320 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:26:11.448615  357320 retry.go:31] will retry after 361.072928ms: ssh: handshake failed: EOF
	W1213 10:26:11.448651  357320 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 10:26:11.448657  357320 retry.go:31] will retry after 252.300133ms: ssh: handshake failed: EOF
	I1213 10:26:11.448813  357320 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:26:11.448825  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 10:26:11.448880  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:11.477613  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:11.634678  357320 node_ready.go:35] waiting up to 6m0s for node "addons-543946" to be "Ready" ...
	I1213 10:26:11.918381  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 10:26:11.932500  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 10:26:11.932577  357320 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 10:26:12.044036  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 10:26:12.044097  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 10:26:12.135634  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 10:26:12.135711  357320 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 10:26:12.243769  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 10:26:12.278423  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 10:26:12.284695  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 10:26:12.284771  357320 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 10:26:12.420991  357320 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 10:26:12.421019  357320 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 10:26:12.435709  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 10:26:12.435737  357320 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 10:26:12.463616  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 10:26:12.528560  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 10:26:12.534863  357320 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 10:26:12.534889  357320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 10:26:12.560606  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 10:26:12.579799  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 10:26:12.587299  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 10:26:12.587325  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 10:26:12.588879  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 10:26:12.612020  357320 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:26:12.612058  357320 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 10:26:12.715274  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:26:12.744114  357320 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:26:12.744184  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 10:26:12.837633  357320 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:26:12.837656  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 10:26:12.839663  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 10:26:12.909186  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:26:12.941000  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 10:26:12.941033  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 10:26:12.961032  357320 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 10:26:12.961064  357320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 10:26:13.018907  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 10:26:13.176800  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 10:26:13.207996  357320 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 10:26:13.208032  357320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 10:26:13.317285  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 10:26:13.317327  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 10:26:13.601366  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 10:26:13.601409  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 10:26:13.635584  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 10:26:13.635655  357320 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	W1213 10:26:13.638229  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:13.862214  357320 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:26:13.862289  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 10:26:14.193816  357320 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 10:26:14.193902  357320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 10:26:14.250643  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:26:14.447126  357320 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.048208521s)
	I1213 10:26:14.447272  357320 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 10:26:14.447246  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.5287896s)
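(Aside, not part of the captured log: the 3.05s bash -c pipeline that completed above is how the host.minikube.internal record gets into CoreDNS: dump the coredns ConfigMap as YAML, sed-insert a hosts block ahead of the "forward . /etc/resolv.conf" directive, and push the edited manifest back with kubectl replace. A minimal Go sketch of that same pipeline, assuming kubectl is on PATH with a default kubeconfig; the run above instead uses the explicit binary path and KUBECONFIG shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Insert a hosts{} block for host.minikube.internal ahead of the
	// "forward . /etc/resolv.conf" line in the coredns Corefile, then
	// replace the ConfigMap with the edited copy. Sketch only.
	pipeline := `kubectl -n kube-system get configmap coredns -o yaml | ` +
		`sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | ` +
		`kubectl replace -f -`

	out, err := exec.Command("bash", "-c", pipeline).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("coredns ConfigMap update failed:", err)
	}
}
)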
	I1213 10:26:14.661440  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 10:26:14.661516  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 10:26:14.970327  357320 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-543946" context rescaled to 1 replicas
	I1213 10:26:15.006643  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 10:26:15.006684  357320 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 10:26:15.210099  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 10:26:15.210124  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 10:26:15.390775  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 10:26:15.390846  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 10:26:15.638187  357320 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 10:26:15.638261  357320 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1213 10:26:15.659167  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:15.773136  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 10:26:16.722494  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.478638329s)
	I1213 10:26:16.722602  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.44410289s)
	I1213 10:26:16.722635  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.258997514s)
	I1213 10:26:17.255259  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.726660971s)
	I1213 10:26:17.255343  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.694711079s)
	I1213 10:26:18.110755  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.530917898s)
	I1213 10:26:18.111277  357320 addons.go:495] Verifying addon ingress=true in "addons-543946"
	I1213 10:26:18.110872  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.521963683s)
	I1213 10:26:18.110928  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.39562493s)
	I1213 10:26:18.110964  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.271279069s)
	I1213 10:26:18.111407  357320 addons.go:495] Verifying addon registry=true in "addons-543946"
	I1213 10:26:18.110980  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.201769172s)
	I1213 10:26:18.111028  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.092094668s)
	I1213 10:26:18.112052  357320 addons.go:495] Verifying addon metrics-server=true in "addons-543946"
	I1213 10:26:18.111073  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.934244748s)
	I1213 10:26:18.111145  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.860419722s)
	W1213 10:26:18.112159  357320 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 10:26:18.112175  357320 retry.go:31] will retry after 159.73683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
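(Aside, not part of the captured log: the failure above is the usual CRD ordering race. csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same apply batch that installs the snapshot.storage.k8s.io CRDs, so the first attempt cannot map the kind and exits 1; the retry at 10:26:18.272 below re-applies the batch with --force. A hedged Go sketch of that retry pattern, assuming kubectl on PATH; file names are taken from the log, backoff values are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applySnapshotAddons retries the batch apply while the API server has not yet
// registered the snapshot.storage.k8s.io CRDs ("ensure CRDs are installed first").
func applySnapshotAddons() error {
	files := []string{
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
		"/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
		"/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml",
	}
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	backoff := 160 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		if !strings.Contains(string(out), "no matches for kind") {
			return fmt.Errorf("apply failed: %v\n%s", err, out)
		}
		time.Sleep(backoff) // CRDs not registered yet; wait and retry
		backoff *= 2
	}
	return fmt.Errorf("gave up waiting for snapshot CRDs")
}

func main() {
	if err := applySnapshotAddons(); err != nil {
		fmt.Println(err)
	}
}
)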
	I1213 10:26:18.115767  357320 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-543946 service yakd-dashboard -n yakd-dashboard
	
	I1213 10:26:18.115918  357320 out.go:179] * Verifying registry addon...
	I1213 10:26:18.115969  357320 out.go:179] * Verifying ingress addon...
	I1213 10:26:18.120381  357320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 10:26:18.121299  357320 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 10:26:18.133504  357320 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:26:18.133524  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:18.134227  357320 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 10:26:18.134245  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
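(Aside, not part of the captured log: the kapi lines above poll pods matched by a label selector until they leave Pending and report Ready. The same wait can be expressed with kubectl wait; a short Go sketch, assuming kubectl on PATH, using the registry selector from the log. The test harness itself polls through the Kubernetes API rather than shelling out.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every registry pod in kube-system reports the Ready condition,
	// mirroring the "waiting for pod ... current state: Pending" loop above.
	cmd := exec.Command("kubectl", "wait",
		"--namespace", "kube-system",
		"--selector", "kubernetes.io/minikube-addons=registry",
		"--for", "condition=Ready",
		"--timeout", "6m",
		"pod")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("pods did not become Ready:", err)
	}
}
)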
	W1213 10:26:18.140907  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:18.272859  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 10:26:18.461147  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.687889292s)
	I1213 10:26:18.461195  357320 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-543946"
	I1213 10:26:18.464149  357320 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 10:26:18.467926  357320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 10:26:18.496627  357320 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:26:18.496662  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:18.625245  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:18.626592  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:18.698171  357320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 10:26:18.698283  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:18.715174  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:26:18.832926  357320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 10:26:18.846279  357320 addons.go:239] Setting addon gcp-auth=true in "addons-543946"
	I1213 10:26:18.846329  357320 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:26:18.846824  357320 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:26:18.864269  357320 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 10:26:18.864326  357320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:26:18.880384  357320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
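(Aside, not part of the captured log: both SSH sessions above are set up by first asking Docker which host port was published for the node container's 22/tcp, using the Go template visible in the cli_runner lines (it resolves to 33143 in this run). A standalone sketch of that lookup, container name taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the host port Docker published for the node container's SSH port.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-543946").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + strings.TrimSpace(string(out)))
}
)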
	I1213 10:26:18.971942  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:19.125013  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:19.125632  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:19.472250  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:19.624396  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:19.624608  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:19.973156  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:20.124367  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:20.124643  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:20.470907  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:20.624904  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:20.625165  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:26:20.638041  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:20.971860  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:21.002146  357320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.729232917s)
	I1213 10:26:21.002186  357320 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.137873452s)
	I1213 10:26:21.006851  357320 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 10:26:21.011449  357320 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 10:26:21.014716  357320 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 10:26:21.014762  357320 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 10:26:21.030597  357320 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 10:26:21.030666  357320 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 10:26:21.045526  357320 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:26:21.045548  357320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 10:26:21.059066  357320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 10:26:21.130145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:21.130779  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:21.473625  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:21.552078  357320 addons.go:495] Verifying addon gcp-auth=true in "addons-543946"
	I1213 10:26:21.554955  357320 out.go:179] * Verifying gcp-auth addon...
	I1213 10:26:21.559345  357320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 10:26:21.573583  357320 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 10:26:21.573606  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:21.674426  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:21.674431  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:21.971187  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:22.063094  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:22.123667  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:22.124564  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:22.471602  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:22.564193  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:22.624335  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:22.624534  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1213 10:26:22.638294  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:22.971838  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:23.062879  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:23.124000  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:23.125150  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:23.473509  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:23.562326  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:23.624179  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:23.624714  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:23.971416  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:24.062533  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:24.123309  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:24.124568  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:24.471631  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:24.563090  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:24.624392  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:24.624761  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 10:26:24.638418  357320 node_ready.go:57] node "addons-543946" has "Ready":"False" status (will retry)
	I1213 10:26:24.971270  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:25.063383  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:25.164119  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:25.164449  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:25.473341  357320 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 10:26:25.473365  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:25.618561  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:25.665011  357320 node_ready.go:49] node "addons-543946" is "Ready"
	I1213 10:26:25.665041  357320 node_ready.go:38] duration metric: took 14.030277579s for node "addons-543946" to be "Ready" ...
	I1213 10:26:25.665056  357320 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:26:25.665115  357320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:26:25.685422  357320 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 10:26:25.685499  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:25.685674  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:25.696630  357320 api_server.go:72] duration metric: took 15.014055801s to wait for apiserver process to appear ...
	I1213 10:26:25.696710  357320 api_server.go:88] waiting for apiserver healthz status ...
	I1213 10:26:25.696746  357320 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 10:26:25.728286  357320 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 10:26:25.730210  357320 api_server.go:141] control plane version: v1.34.2
	I1213 10:26:25.730279  357320 api_server.go:131] duration metric: took 33.548613ms to wait for apiserver health ...
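(Aside, not part of the captured log: the healthz wait above is a plain HTTPS GET against the API server endpoint until it answers 200 "ok". A minimal Go sketch of that probe; it skips certificate verification for brevity, whereas the real check trusts the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll https://192.168.49.2:8443/healthz until the API server returns 200 "ok".
	// InsecureSkipVerify is for the sketch only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
)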
	I1213 10:26:25.730301  357320 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 10:26:25.789234  357320 system_pods.go:59] 19 kube-system pods found
	I1213 10:26:25.789334  357320 system_pods.go:61] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:25.789355  357320 system_pods.go:61] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending
	I1213 10:26:25.789402  357320 system_pods.go:61] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:25.789506  357320 system_pods.go:61] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending
	I1213 10:26:25.789532  357320 system_pods.go:61] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:25.789563  357320 system_pods.go:61] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:25.789585  357320 system_pods.go:61] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:25.789606  357320 system_pods.go:61] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:25.789645  357320 system_pods.go:61] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:25.789668  357320 system_pods.go:61] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:25.789687  357320 system_pods.go:61] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:25.789724  357320 system_pods.go:61] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:25.789747  357320 system_pods.go:61] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending
	I1213 10:26:25.789771  357320 system_pods.go:61] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:25.789815  357320 system_pods.go:61] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending
	I1213 10:26:25.789838  357320 system_pods.go:61] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending
	I1213 10:26:25.789889  357320 system_pods.go:61] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:25.789914  357320 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending
	I1213 10:26:25.789934  357320 system_pods.go:61] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:25.789967  357320 system_pods.go:74] duration metric: took 59.646683ms to wait for pod list to return data ...
	I1213 10:26:25.789994  357320 default_sa.go:34] waiting for default service account to be created ...
	I1213 10:26:25.805909  357320 default_sa.go:45] found service account: "default"
	I1213 10:26:25.805986  357320 default_sa.go:55] duration metric: took 15.970933ms for default service account to be created ...
	I1213 10:26:25.806012  357320 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 10:26:25.848445  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:25.848535  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:25.848560  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending
	I1213 10:26:25.848603  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:25.848667  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending
	I1213 10:26:25.848775  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:25.848802  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:25.848821  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:25.848843  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:25.848884  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:25.848908  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:25.848993  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:25.849024  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:25.849045  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending
	I1213 10:26:25.849068  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:25.849103  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending
	I1213 10:26:25.849212  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending
	I1213 10:26:25.849241  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:25.849263  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending
	I1213 10:26:25.849291  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:25.849334  357320 retry.go:31] will retry after 231.460689ms: missing components: kube-dns
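(Aside, not part of the captured log: the system_pods wait keeps listing kube-system pods and retrying with a growing delay until the required components are Running; in this pass only kube-dns (coredns) is still missing. A rough Go sketch of that loop, expressed with kubectl jsonpath output instead of the API client the harness uses; the k8s-app=kube-dns selector and the backoff are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// coreDNSRunning reports whether any coredns pod in kube-system is in phase Running.
func coreDNSRunning() bool {
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
		"-l", "k8s-app=kube-dns",
		"-o", `jsonpath={range .items[*]}{.status.phase}{"\n"}{end}`).Output()
	if err != nil {
		return false
	}
	for _, phase := range strings.Fields(string(out)) {
		if phase == "Running" {
			return true
		}
	}
	return false
}

func main() {
	delay := 250 * time.Millisecond // illustrative; the log uses randomized backoff
	for !coreDNSRunning() {
		fmt.Println("missing components: kube-dns, retrying in", delay)
		time.Sleep(delay)
		delay += delay / 2
	}
	fmt.Println("kube-system pods are running")
}
)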
	I1213 10:26:25.982707  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:26.064949  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:26.098503  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:26.098595  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:26.098620  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:26:26.098663  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:26.098689  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:26:26.098710  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:26.098749  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:26.098774  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:26.098795  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:26.098835  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:26.098871  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:26.098893  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:26.098931  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:26.098959  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:26:26.098996  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:26.099024  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:26:26.099048  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:26:26.099083  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.099114  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.099136  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:26.099181  357320 retry.go:31] will retry after 299.980132ms: missing components: kube-dns
	I1213 10:26:26.130032  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:26.130376  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:26.406620  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:26.406706  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 10:26:26.406731  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:26:26.406773  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:26.406801  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:26:26.406824  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:26.406861  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:26.406888  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:26.406908  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:26.406957  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:26.406982  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:26.407005  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:26.407039  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:26.407067  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:26:26.407088  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:26.407126  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:26:26.407153  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:26:26.407175  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.407214  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.407240  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 10:26:26.407270  357320 retry.go:31] will retry after 417.018213ms: missing components: kube-dns
	I1213 10:26:26.472259  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:26.563093  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:26.625066  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:26.625169  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:26.831079  357320 system_pods.go:86] 19 kube-system pods found
	I1213 10:26:26.831113  357320 system_pods.go:89] "coredns-66bc5c9577-2h2qj" [d708bfcb-b562-4258-8a02-3496434b9d0f] Running
	I1213 10:26:26.831124  357320 system_pods.go:89] "csi-hostpath-attacher-0" [6945d76f-4774-4e6a-bfe4-9967102c44ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 10:26:26.831131  357320 system_pods.go:89] "csi-hostpath-resizer-0" [b5d0755c-9a98-4a4b-85d6-cde9012ab8d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 10:26:26.831140  357320 system_pods.go:89] "csi-hostpathplugin-j4gkx" [dc459b50-b68a-4fb1-a005-511c1f6644de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 10:26:26.831146  357320 system_pods.go:89] "etcd-addons-543946" [4cfdbfb8-1315-4335-a06b-7c55934ebfdd] Running
	I1213 10:26:26.831151  357320 system_pods.go:89] "kindnet-rjdb7" [fa5b3d77-68f9-4360-99ec-936116cfd80b] Running
	I1213 10:26:26.831161  357320 system_pods.go:89] "kube-apiserver-addons-543946" [a3c23361-35df-48b8-a5c2-b2a860c09121] Running
	I1213 10:26:26.831166  357320 system_pods.go:89] "kube-controller-manager-addons-543946" [029b6027-bd2d-4bc1-a36a-a61f7fdc09db] Running
	I1213 10:26:26.831176  357320 system_pods.go:89] "kube-ingress-dns-minikube" [f7b1db38-bdd3-4eec-ac48-c1307ffa281d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 10:26:26.831181  357320 system_pods.go:89] "kube-proxy-cmcs4" [191353c6-fc9a-4820-a0b4-3f621cd4b35b] Running
	I1213 10:26:26.831191  357320 system_pods.go:89] "kube-scheduler-addons-543946" [efaf06b5-70f0-42bf-a215-ad31bbbfd54f] Running
	I1213 10:26:26.831198  357320 system_pods.go:89] "metrics-server-85b7d694d7-h5rdh" [d30aae3c-2ad4-4e72-85a3-0ee845487f8e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 10:26:26.831208  357320 system_pods.go:89] "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 10:26:26.831214  357320 system_pods.go:89] "registry-6b586f9694-w4p9x" [faf524b7-f1b3-484a-941d-99c4e0ea1742] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 10:26:26.831222  357320 system_pods.go:89] "registry-creds-764b6fb674-sgjj5" [2e8c14e5-85f0-48f5-95ae-fe51d21ead63] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 10:26:26.831230  357320 system_pods.go:89] "registry-proxy-rd2tq" [509e84c1-a9e8-47b2-87dc-ce6324a1acdd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 10:26:26.831237  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-86m6m" [8865a54f-056c-465c-8eb8-be37cc2ffbf1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.831243  357320 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cf7bx" [e6bdb444-916c-47d2-afc0-8fe2cc811cb9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 10:26:26.831248  357320 system_pods.go:89] "storage-provisioner" [7a0ef0b4-dd06-4d44-8931-5ebb6dcc2276] Running
	I1213 10:26:26.831259  357320 system_pods.go:126] duration metric: took 1.02522791s to wait for k8s-apps to be running ...
	I1213 10:26:26.831271  357320 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 10:26:26.831326  357320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:26:26.847578  357320 system_svc.go:56] duration metric: took 16.298986ms WaitForService to wait for kubelet
	I1213 10:26:26.847609  357320 kubeadm.go:587] duration metric: took 16.165040106s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:26:26.847630  357320 node_conditions.go:102] verifying NodePressure condition ...
	I1213 10:26:26.851586  357320 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 10:26:26.851671  357320 node_conditions.go:123] node cpu capacity is 2
	I1213 10:26:26.851701  357320 node_conditions.go:105] duration metric: took 4.065235ms to run NodePressure ...
	I1213 10:26:26.851743  357320 start.go:242] waiting for startup goroutines ...
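(Aside, not part of the captured log: the NodePressure step above reads the node's capacity from the Node object, 203034800Ki of ephemeral storage and 2 CPUs in this run. The same fields can be pulled with a jsonpath query; a sketch, node name taken from the log, assuming kubectl on PATH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print the capacity map recorded on the node object (cpu, memory,
	// ephemeral-storage, ...), the same data the node_conditions check logs above.
	out, err := exec.Command("kubectl", "get", "node", "addons-543946",
		"-o", `jsonpath={.status.capacity}`).Output()
	if err != nil {
		fmt.Println("failed to read node capacity:", err)
		return
	}
	fmt.Println(string(out))
}
)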
	I1213 10:26:26.972082  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:27.063044  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:27.128232  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:27.128336  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:27.472039  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:27.563583  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:27.625254  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:27.625383  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:27.972214  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:28.063028  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:28.124427  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:28.124597  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:28.471579  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:28.562511  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:28.625065  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:28.625694  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:28.971433  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:29.063474  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:29.124905  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:29.125648  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:29.476949  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:29.576904  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:29.624356  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:29.624961  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:29.981669  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:30.074396  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:30.126479  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:30.126882  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:30.471741  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:30.562705  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:30.624071  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:30.626819  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:30.972210  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:31.063496  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:31.124103  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:31.127143  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:31.472192  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:31.563278  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:31.625724  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:31.625836  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:31.970808  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:32.062718  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:32.123483  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:32.125873  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:32.472378  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:32.562299  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:32.626218  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:32.626466  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:32.971925  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:33.062940  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:33.126354  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:33.126488  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:33.472404  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:33.562425  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:33.625435  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:33.625569  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:33.972541  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:34.062859  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:34.124985  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:34.126900  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:34.472387  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:34.572629  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:34.625405  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:34.625618  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:34.971281  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:35.062048  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:35.125832  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:35.126051  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:35.470924  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:35.562805  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:35.625227  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:35.625772  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:35.974146  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:36.063579  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:36.124298  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:36.126086  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:36.471941  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:36.563071  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:36.625639  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:36.625881  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:36.972162  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:37.062913  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:37.125199  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:37.126016  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:37.472773  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:37.563066  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:37.636806  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:37.637268  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:37.972471  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:38.064095  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:38.125771  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:38.126437  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:38.474012  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:38.564369  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:38.623814  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:38.626709  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:38.971598  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:39.062729  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:39.125003  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:39.126270  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:39.471399  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:39.562655  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:39.623954  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:39.624487  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:39.973954  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:40.063363  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:40.122964  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:40.124562  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:40.472938  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:40.564757  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:40.626251  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:40.626687  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:40.972275  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:41.075434  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:41.197717  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:41.197878  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:41.471576  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:41.562716  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:41.623621  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:41.624616  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:41.971305  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:42.062522  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:42.125102  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:42.125597  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:42.472432  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:42.562911  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:42.625033  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:42.626769  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:42.971464  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:43.062988  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:43.125821  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:43.126918  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:43.471856  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:43.563191  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:43.626157  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:43.626614  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:43.971981  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:44.063375  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:44.124591  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:44.124959  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:44.472224  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:44.572695  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:44.624107  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:44.625545  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:44.971248  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:45.063259  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:45.127866  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:45.128064  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:45.471720  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:45.562409  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:45.625381  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:45.625535  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:45.972810  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:46.063014  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:46.125987  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:46.126379  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:46.472348  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:46.562636  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:46.625755  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:46.626607  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:46.972439  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:47.064061  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:47.125339  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:47.125487  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:47.472319  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:47.562602  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:47.623143  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:47.624629  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:47.971224  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:48.062115  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:48.124595  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:48.124760  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:48.472555  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:48.562616  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:48.625775  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:48.626822  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:48.970908  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:49.063089  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:49.125141  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:49.125319  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:49.471813  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:49.563006  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:49.625047  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:49.625199  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:49.981978  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:50.065199  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:50.126240  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:50.126422  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:50.475459  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:50.576644  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:50.676839  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:50.677372  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:50.973790  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:51.063084  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:51.129145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:51.129800  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:51.471999  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:51.563363  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:51.626048  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:51.626468  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:51.971861  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:52.062858  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:52.126305  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:52.126917  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:52.473669  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:52.563320  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:52.625941  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:52.626167  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:52.977318  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:53.062758  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:53.124117  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:53.124784  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:53.471068  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:53.563108  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:53.625776  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:53.626173  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:53.971927  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:54.063058  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:54.133121  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:54.133143  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:54.473103  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:54.574145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:54.677646  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:54.678031  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:54.970954  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:55.062898  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:55.124672  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:55.124916  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:55.471094  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:55.563030  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:55.623724  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:55.624422  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:55.976604  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:56.063028  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:56.124511  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:56.125959  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:56.472865  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:56.562961  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:56.625816  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:56.626291  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:56.973527  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:57.062515  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:57.124771  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:57.126603  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:57.472723  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:57.562729  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:57.626061  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:57.626421  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:57.972224  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:58.063536  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:58.124557  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:58.126335  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:58.474211  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:58.563211  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:58.626121  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:58.626332  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:58.973309  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:59.063899  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:59.125900  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:59.126287  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:59.472331  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:26:59.562352  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:26:59.624523  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:26:59.625667  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:26:59.975780  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:00.074661  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:00.156620  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:00.157143  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:00.472352  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:00.562068  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:00.625671  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:00.625866  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:00.971757  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:01.062876  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:01.124366  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:01.125914  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:01.471566  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:01.562426  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:01.623873  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:01.625160  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:02.005131  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:02.063274  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:02.124463  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:02.124601  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:02.471058  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:02.563424  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:02.624529  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:02.625532  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:02.971924  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:03.063012  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:03.126220  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:03.126362  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:03.472380  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:03.562457  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:03.624830  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:03.625593  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:03.972310  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:04.063629  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:04.128380  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:04.128576  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:04.470837  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:04.567357  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:04.624696  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 10:27:04.624956  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:04.976956  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:05.074333  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:05.124931  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:05.125714  357320 kapi.go:107] duration metric: took 47.005334302s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 10:27:05.472035  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:05.563162  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:05.625410  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:05.971974  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:06.063663  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:06.125156  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:06.472359  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:06.572417  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:06.624681  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:06.972661  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:07.062806  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:07.125043  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:07.471222  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:07.565876  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:07.625945  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:07.972041  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:08.063145  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:08.125905  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:08.471747  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:08.571891  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:08.631964  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:08.971428  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:09.063676  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:09.125111  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:09.472529  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:09.565787  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:09.626830  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:09.973357  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:10.062738  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:10.125102  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:10.471962  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:10.563539  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:10.624664  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:10.972511  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:11.064751  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:11.126169  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:11.474080  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:11.575832  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:11.624452  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:11.972169  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:12.063627  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 10:27:12.135018  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:12.471565  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:12.572501  357320 kapi.go:107] duration metric: took 51.013158445s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 10:27:12.575976  357320 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-543946 cluster.
	I1213 10:27:12.579630  357320 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 10:27:12.582867  357320 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 10:27:12.624662  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:12.971185  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:13.124369  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:13.475822  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:13.631601  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:13.972069  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:14.129382  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:14.472767  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:14.625884  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:14.971828  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:15.126120  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:15.472509  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:15.624174  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:15.972129  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:16.125634  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:16.471655  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:16.625351  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:16.972278  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:17.125019  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:17.471214  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:17.625316  357320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 10:27:17.971898  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:18.132928  357320 kapi.go:107] duration metric: took 1m0.011625722s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 10:27:18.471889  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:18.973182  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:19.548074  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:19.973877  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:20.472245  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:20.972127  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:21.472749  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:21.971906  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:22.471763  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:22.973183  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:23.471906  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:23.971655  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:24.471117  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:24.971997  357320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 10:27:25.472007  357320 kapi.go:107] duration metric: took 1m7.004084489s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 10:27:25.475073  357320 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, amd-gpu-device-plugin, registry-creds, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1213 10:27:25.477943  357320 addons.go:530] duration metric: took 1m14.794927144s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher inspektor-gadget amd-gpu-device-plugin registry-creds storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1213 10:27:25.478006  357320 start.go:247] waiting for cluster config update ...
	I1213 10:27:25.478029  357320 start.go:256] writing updated cluster config ...
	I1213 10:27:25.478357  357320 ssh_runner.go:195] Run: rm -f paused
	I1213 10:27:25.482980  357320 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:27:25.486505  357320 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2h2qj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.491276  357320 pod_ready.go:94] pod "coredns-66bc5c9577-2h2qj" is "Ready"
	I1213 10:27:25.491304  357320 pod_ready.go:86] duration metric: took 4.775626ms for pod "coredns-66bc5c9577-2h2qj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.493590  357320 pod_ready.go:83] waiting for pod "etcd-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.498029  357320 pod_ready.go:94] pod "etcd-addons-543946" is "Ready"
	I1213 10:27:25.498055  357320 pod_ready.go:86] duration metric: took 4.44187ms for pod "etcd-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.500498  357320 pod_ready.go:83] waiting for pod "kube-apiserver-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.505135  357320 pod_ready.go:94] pod "kube-apiserver-addons-543946" is "Ready"
	I1213 10:27:25.505211  357320 pod_ready.go:86] duration metric: took 4.67855ms for pod "kube-apiserver-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.508057  357320 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:25.887091  357320 pod_ready.go:94] pod "kube-controller-manager-addons-543946" is "Ready"
	I1213 10:27:25.887178  357320 pod_ready.go:86] duration metric: took 379.092479ms for pod "kube-controller-manager-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:26.087702  357320 pod_ready.go:83] waiting for pod "kube-proxy-cmcs4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:26.486824  357320 pod_ready.go:94] pod "kube-proxy-cmcs4" is "Ready"
	I1213 10:27:26.486850  357320 pod_ready.go:86] duration metric: took 399.118554ms for pod "kube-proxy-cmcs4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:26.687397  357320 pod_ready.go:83] waiting for pod "kube-scheduler-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:27.087449  357320 pod_ready.go:94] pod "kube-scheduler-addons-543946" is "Ready"
	I1213 10:27:27.087480  357320 pod_ready.go:86] duration metric: took 400.054671ms for pod "kube-scheduler-addons-543946" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 10:27:27.087494  357320 pod_ready.go:40] duration metric: took 1.604478668s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 10:27:27.152045  357320 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 10:27:27.155268  357320 out.go:179] * Done! kubectl is now configured to use "addons-543946" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 10:27:24 addons-543946 crio[830]: time="2025-12-13T10:27:24.580680039Z" level=info msg="Created container 9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428: kube-system/csi-hostpathplugin-j4gkx/csi-snapshotter" id=5b06c12c-b03c-4d8d-b4d9-f6b6a6dbc168 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:27:24 addons-543946 crio[830]: time="2025-12-13T10:27:24.581738841Z" level=info msg="Starting container: 9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428" id=0a44eb62-ea83-4629-a9af-a324fd721681 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 10:27:24 addons-543946 crio[830]: time="2025-12-13T10:27:24.584385048Z" level=info msg="Started container" PID=4944 containerID=9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428 description=kube-system/csi-hostpathplugin-j4gkx/csi-snapshotter id=0a44eb62-ea83-4629-a9af-a324fd721681 name=/runtime.v1.RuntimeService/StartContainer sandboxID=01495111ea3d872e6324ee576a80831683bc4c1c08bb702bf4f96118f46a90e5
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.141550552Z" level=info msg="Running pod sandbox: default/busybox/POD" id=92d6447f-9962-40b9-9ff6-7dde8f85fe82 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.141637897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.148218729Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:93deca5b2751b43147d3c6c0105f7a837034ba8717b9afcc75c15f9709f29b3f UID:c4936020-682f-4f78-8ab7-70b9e9cd5ae0 NetNS:/var/run/netns/053e0925-5860-47e8-a9df-3f126265105a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014eaf70}] Aliases:map[]}"
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.148267435Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.159393884Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:93deca5b2751b43147d3c6c0105f7a837034ba8717b9afcc75c15f9709f29b3f UID:c4936020-682f-4f78-8ab7-70b9e9cd5ae0 NetNS:/var/run/netns/053e0925-5860-47e8-a9df-3f126265105a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40014eaf70}] Aliases:map[]}"
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.160027343Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.164349163Z" level=info msg="Ran pod sandbox 93deca5b2751b43147d3c6c0105f7a837034ba8717b9afcc75c15f9709f29b3f with infra container: default/busybox/POD" id=92d6447f-9962-40b9-9ff6-7dde8f85fe82 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.165627134Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5956efbd-7403-428c-8e3e-ef3ad7177c99 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.16579466Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5956efbd-7403-428c-8e3e-ef3ad7177c99 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.165861221Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5956efbd-7403-428c-8e3e-ef3ad7177c99 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.167774226Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8a738e45-5fbb-4f8e-af19-f3f35b4c2970 name=/runtime.v1.ImageService/PullImage
	Dec 13 10:27:28 addons-543946 crio[830]: time="2025-12-13T10:27:28.169295778Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.148655667Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8a738e45-5fbb-4f8e-af19-f3f35b4c2970 name=/runtime.v1.ImageService/PullImage
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.15183039Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2f502cb9-949e-4dba-b8d6-f90922b1ad95 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.153951135Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9eee6c19-9909-4f79-9739-3151c9dc7129 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.159898105Z" level=info msg="Creating container: default/busybox/busybox" id=fc4e6d80-cd9f-4f5b-9f40-2296f89bc36f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.160049926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.174407279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.17500562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.205079108Z" level=info msg="Created container a131b0e592dd6a5fcbb0f0ec4f679e52d9028a7c65a92eab3eeed5ee0fa821a1: default/busybox/busybox" id=fc4e6d80-cd9f-4f5b-9f40-2296f89bc36f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.206519288Z" level=info msg="Starting container: a131b0e592dd6a5fcbb0f0ec4f679e52d9028a7c65a92eab3eeed5ee0fa821a1" id=a74655f8-6300-4773-b932-4fc4565120f3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 10:27:30 addons-543946 crio[830]: time="2025-12-13T10:27:30.210519103Z" level=info msg="Started container" PID=5032 containerID=a131b0e592dd6a5fcbb0f0ec4f679e52d9028a7c65a92eab3eeed5ee0fa821a1 description=default/busybox/busybox id=a74655f8-6300-4773-b932-4fc4565120f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93deca5b2751b43147d3c6c0105f7a837034ba8717b9afcc75c15f9709f29b3f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a131b0e592dd6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          10 seconds ago       Running             busybox                                  0                   93deca5b2751b       busybox                                     default
	9773f1dabc6fd       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	ad42e673ec298       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          17 seconds ago       Running             csi-provisioner                          0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	91959e0b37017       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	ade5c570c4dbe       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	67f7b897bbe36       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             19 seconds ago       Exited              patch                                    3                   e27910df99afe       ingress-nginx-admission-patch-qvvht         ingress-nginx
	7710a35bda17f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                21 seconds ago       Running             node-driver-registrar                    0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	04dab4e403d9a       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             22 seconds ago       Running             controller                               0                   a88a6a078e4de       ingress-nginx-controller-85d4c799dd-pdrq4   ingress-nginx
	aa8fac53ee008       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             28 seconds ago       Exited              patch                                    3                   05e2801dcb896       gcp-auth-certs-patch-jx86w                  gcp-auth
	729dc85a7df2d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 28 seconds ago       Running             gcp-auth                                 0                   dfd0b66e71367       gcp-auth-78565c9fb4-2rxfg                   gcp-auth
	8dc3964679adf       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            32 seconds ago       Running             gadget                                   0                   592205e009044       gadget-lqcbm                                gadget
	2cc901f4d3fb0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              35 seconds ago       Running             registry-proxy                           0                   892674206af28       registry-proxy-rd2tq                        kube-system
	ead471b4c6339       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   38 seconds ago       Running             csi-external-health-monitor-controller   0                   01495111ea3d8       csi-hostpathplugin-j4gkx                    kube-system
	6df4509cdc61a       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              40 seconds ago       Running             yakd                                     0                   3691fbab0a6c9       yakd-dashboard-5ff678cb9-gjksm              yakd-dashboard
	cd17aa42a109a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   44 seconds ago       Exited              create                                   0                   b1573cd84e28a       ingress-nginx-admission-create-5dcld        ingress-nginx
	59d844a8a4aed       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     44 seconds ago       Running             nvidia-device-plugin-ctr                 0                   7f20b7157a6b1       nvidia-device-plugin-daemonset-8blxf        kube-system
	2046ccbd3de83       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   49 seconds ago       Exited              create                                   0                   b9d65a345fa01       gcp-auth-certs-create-m82pp                 gcp-auth
	0bcd4d507bd4a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      50 seconds ago       Running             volume-snapshot-controller               0                   2fbaec37eb8e2       snapshot-controller-7d9fbc56b8-86m6m        kube-system
	9df3579774fb7       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        50 seconds ago       Running             metrics-server                           0                   e650d144045a4       metrics-server-85b7d694d7-h5rdh             kube-system
	7e758bd7d4de4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             52 seconds ago       Running             local-path-provisioner                   0                   69fe6e0cdd6f1       local-path-provisioner-648f6765c9-569sb     local-path-storage
	f8f4b2d0d0ca0       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             53 seconds ago       Running             csi-attacher                             0                   54a608455bb53       csi-hostpath-attacher-0                     kube-system
	ea7feaaedcdea       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               54 seconds ago       Running             cloud-spanner-emulator                   0                   b7f5bed676861       cloud-spanner-emulator-5bdddb765-8bzlp      default
	82ec4f0d27393       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      59 seconds ago       Running             volume-snapshot-controller               0                   8ee024ae3484e       snapshot-controller-7d9fbc56b8-cf7bx        kube-system
	46a1e5bc68671       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   170bd2c3b5b7b       csi-hostpath-resizer-0                      kube-system
	abd7fd4640572       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   63461495bf3c3       kube-ingress-dns-minikube                   kube-system
	ca06350334c82       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   4126a8b59b290       registry-6b586f9694-w4p9x                   kube-system
	d5c5cc43186b7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   f4c514b6a01ed       coredns-66bc5c9577-2h2qj                    kube-system
	cc0f178df84bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   74c060477a7a7       storage-provisioner                         kube-system
	40451bec4cc26       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           About a minute ago   Running             kindnet-cni                              0                   3d5fbe8de519f       kindnet-rjdb7                               kube-system
	ca309cac66452       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   d9d99089c50e3       kube-proxy-cmcs4                            kube-system
	051e9f414ee6e       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             About a minute ago   Running             kube-apiserver                           0                   4778db1696aa5       kube-apiserver-addons-543946                kube-system
	cf448155f622a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             About a minute ago   Running             etcd                                     0                   dc0a116289aa9       etcd-addons-543946                          kube-system
	76b7938d7fbe3       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             About a minute ago   Running             kube-controller-manager                  0                   bd558ee45801a       kube-controller-manager-addons-543946       kube-system
	fe380896f1e4d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             About a minute ago   Running             kube-scheduler                           0                   bfd8a5a5bcb91       kube-scheduler-addons-543946                kube-system
	
	
	==> coredns [d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af] <==
	[INFO] 10.244.0.14:48083 - 40804 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000061908s
	[INFO] 10.244.0.14:48083 - 51467 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002392182s
	[INFO] 10.244.0.14:48083 - 7998 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00177451s
	[INFO] 10.244.0.14:48083 - 1424 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102089s
	[INFO] 10.244.0.14:48083 - 23560 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000081674s
	[INFO] 10.244.0.14:45654 - 14408 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159813s
	[INFO] 10.244.0.14:45654 - 14211 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000285517s
	[INFO] 10.244.0.14:41934 - 22925 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137709s
	[INFO] 10.244.0.14:41934 - 22753 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131669s
	[INFO] 10.244.0.14:53966 - 9160 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011068s
	[INFO] 10.244.0.14:53966 - 8706 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099004s
	[INFO] 10.244.0.14:59941 - 37056 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001922811s
	[INFO] 10.244.0.14:59941 - 37245 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002346783s
	[INFO] 10.244.0.14:45730 - 41236 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150418s
	[INFO] 10.244.0.14:45730 - 41023 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192412s
	[INFO] 10.244.0.21:39173 - 41471 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176511s
	[INFO] 10.244.0.21:54349 - 3361 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00008933s
	[INFO] 10.244.0.21:52404 - 19122 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000183157s
	[INFO] 10.244.0.21:41724 - 62930 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093728s
	[INFO] 10.244.0.21:46606 - 37887 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013957s
	[INFO] 10.244.0.21:50588 - 25114 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084981s
	[INFO] 10.244.0.21:52007 - 62136 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004125912s
	[INFO] 10.244.0.21:45875 - 16367 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004665151s
	[INFO] 10.244.0.21:58335 - 43326 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001088496s
	[INFO] 10.244.0.21:55746 - 52027 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.008003313s
	
	
	==> describe nodes <==
	Name:               addons-543946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-543946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=addons-543946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T10_26_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-543946
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-543946"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 10:26:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-543946
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 10:27:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 10:27:17 +0000   Sat, 13 Dec 2025 10:25:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 10:27:17 +0000   Sat, 13 Dec 2025 10:25:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 10:27:17 +0000   Sat, 13 Dec 2025 10:25:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 10:27:17 +0000   Sat, 13 Dec 2025 10:26:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-543946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                e1eb433e-9ee9-4616-8513-68821455500a
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     cloud-spanner-emulator-5bdddb765-8bzlp       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  gadget                      gadget-lqcbm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  gcp-auth                    gcp-auth-78565c9fb4-2rxfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-pdrq4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         82s
	  kube-system                 coredns-66bc5c9577-2h2qj                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 csi-hostpathplugin-j4gkx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 etcd-addons-543946                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-rjdb7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-addons-543946                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-addons-543946        200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-cmcs4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-addons-543946                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-85b7d694d7-h5rdh              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         83s
	  kube-system                 nvidia-device-plugin-daemonset-8blxf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-6b586f9694-w4p9x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 registry-creds-764b6fb674-sgjj5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 registry-proxy-rd2tq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 snapshot-controller-7d9fbc56b8-86m6m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 snapshot-controller-7d9fbc56b8-cf7bx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  local-path-storage          local-path-provisioner-648f6765c9-569sb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-gjksm               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 88s                  kube-proxy       
	  Warning  CgroupV1                 102s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node addons-543946 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node addons-543946 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x8 over 102s)  kubelet          Node addons-543946 status is now: NodeHasSufficientPID
	  Normal   Starting                 95s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 95s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  95s                  kubelet          Node addons-543946 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s                  kubelet          Node addons-543946 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s                  kubelet          Node addons-543946 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           91s                  node-controller  Node addons-543946 event: Registered Node addons-543946 in Controller
	  Normal   NodeReady                75s                  kubelet          Node addons-543946 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951] <==
	{"level":"warn","ts":"2025-12-13T10:26:01.643376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.666500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.682630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.704910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.717615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.740022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.751569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.797967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.798635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.816005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.870248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.895624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:01.911762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:02.015596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48816","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T10:26:11.342784Z","caller":"traceutil/trace.go:172","msg":"trace[626553572] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"111.581256ms","start":"2025-12-13T10:26:11.231168Z","end":"2025-12-13T10:26:11.342749Z","steps":["trace[626553572] 'process raft request'  (duration: 99.173332ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T10:26:11.343951Z","caller":"traceutil/trace.go:172","msg":"trace[1998081431] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"112.323033ms","start":"2025-12-13T10:26:11.231610Z","end":"2025-12-13T10:26:11.343933Z","steps":["trace[1998081431] 'process raft request'  (duration: 98.865875ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T10:26:11.414708Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.332837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-13T10:26:11.414780Z","caller":"traceutil/trace.go:172","msg":"trace[1040673509] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:376; }","duration":"109.417105ms","start":"2025-12-13T10:26:11.305350Z","end":"2025-12-13T10:26:11.414767Z","steps":["trace[1040673509] 'agreement among raft nodes before linearized reading'  (duration: 39.833179ms)","trace[1040673509] 'range keys from in-memory index tree'  (duration: 69.481402ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T10:26:11.420323Z","caller":"traceutil/trace.go:172","msg":"trace[648221834] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"114.35942ms","start":"2025-12-13T10:26:11.305939Z","end":"2025-12-13T10:26:11.420298Z","steps":["trace[648221834] 'process raft request'  (duration: 71.31576ms)","trace[648221834] 'compare'  (duration: 37.626656ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T10:26:18.604623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:18.622187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.876463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.886969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.915264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T10:26:39.932393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34364","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [729dc85a7df2d4c932f095d2c39d9d6da7bd7512f04db51ca120e0979dc276c1] <==
	2025/12/13 10:27:11 GCP Auth Webhook started!
	2025/12/13 10:27:27 Ready to marshal response ...
	2025/12/13 10:27:27 Ready to write response ...
	2025/12/13 10:27:27 Ready to marshal response ...
	2025/12/13 10:27:27 Ready to write response ...
	2025/12/13 10:27:27 Ready to marshal response ...
	2025/12/13 10:27:27 Ready to write response ...
	
	
	==> kernel <==
	 10:27:40 up  2:10,  0 user,  load average: 2.32, 2.15, 1.71
	Linux addons-543946 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db] <==
	I1213 10:26:14.728076       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T10:26:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 10:26:14.936641       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 10:26:14.936661       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 10:26:14.936671       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 10:26:14.936825       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 10:26:15.219601       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 10:26:15.219637       1 metrics.go:72] Registering metrics
	I1213 10:26:15.219693       1 controller.go:711] "Syncing nftables rules"
	I1213 10:26:24.929358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:26:24.929438       1 main.go:301] handling current node
	I1213 10:26:34.931630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:26:34.931727       1 main.go:301] handling current node
	I1213 10:26:44.931588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:26:44.931685       1 main.go:301] handling current node
	I1213 10:26:54.929766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:26:54.929811       1 main.go:301] handling current node
	I1213 10:27:04.929427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:27:04.929459       1 main.go:301] handling current node
	I1213 10:27:14.928762       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:27:14.932006       1 main.go:301] handling current node
	I1213 10:27:24.929590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:27:24.929630       1 main.go:301] handling current node
	I1213 10:27:34.930489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 10:27:34.930527       1 main.go:301] handling current node
	
	
	==> kube-apiserver [051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624] <==
	I1213 10:26:18.316169       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1213 10:26:18.424218       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.107.214.190"}
	W1213 10:26:18.604489       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:26:18.621460       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1213 10:26:21.429971       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.83.249"}
	W1213 10:26:25.253960       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.83.249:443: connect: connection refused
	E1213 10:26:25.254011       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.83.249:443: connect: connection refused" logger="UnhandledError"
	W1213 10:26:25.255054       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.83.249:443: connect: connection refused
	E1213 10:26:25.255132       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.83.249:443: connect: connection refused" logger="UnhandledError"
	W1213 10:26:25.333220       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.83.249:443: connect: connection refused
	E1213 10:26:25.333353       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.83.249:443: connect: connection refused" logger="UnhandledError"
	W1213 10:26:39.868343       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:26:39.886982       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:26:39.915157       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:26:39.930336       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1213 10:27:01.939922       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 10:27:01.939944       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.61.160:443: connect: connection refused" logger="UnhandledError"
	E1213 10:27:01.940099       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 10:27:01.940588       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.61.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.61.160:443: connect: connection refused" logger="UnhandledError"
	I1213 10:27:02.030999       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 10:27:38.108441       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43278: use of closed network connection
	E1213 10:27:38.363358       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43316: use of closed network connection
	
	
	==> kube-controller-manager [76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419] <==
	I1213 10:26:09.862553       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 10:26:09.875575       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 10:26:09.879857       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 10:26:09.881006       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 10:26:09.881074       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 10:26:09.881182       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 10:26:09.881120       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 10:26:09.881110       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 10:26:09.882497       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 10:26:09.882566       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 10:26:09.882523       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 10:26:09.883899       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 10:26:09.887296       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 10:26:09.887300       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 10:26:09.887457       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 10:26:09.888514       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:26:09.888518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 10:26:29.876818       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1213 10:26:39.859346       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 10:26:39.859568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 10:26:39.859629       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 10:26:39.899473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 10:26:39.903245       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 10:26:39.960635       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 10:26:40.004400       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec] <==
	I1213 10:26:11.713305       1 server_linux.go:53] "Using iptables proxy"
	I1213 10:26:11.853348       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 10:26:11.959284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 10:26:11.959361       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 10:26:11.959467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 10:26:12.028416       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 10:26:12.028473       1 server_linux.go:132] "Using iptables Proxier"
	I1213 10:26:12.034030       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 10:26:12.034321       1 server.go:527] "Version info" version="v1.34.2"
	I1213 10:26:12.034336       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 10:26:12.035921       1 config.go:200] "Starting service config controller"
	I1213 10:26:12.035932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 10:26:12.035948       1 config.go:106] "Starting endpoint slice config controller"
	I1213 10:26:12.035952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 10:26:12.035962       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 10:26:12.035966       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 10:26:12.036682       1 config.go:309] "Starting node config controller"
	I1213 10:26:12.036690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 10:26:12.036696       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 10:26:12.138942       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 10:26:12.139167       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 10:26:12.139462       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d] <==
	E1213 10:26:03.005841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:26:03.005940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:26:03.006038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:26:03.006140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 10:26:03.006525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 10:26:03.006624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 10:26:03.006640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:26:03.006255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 10:26:03.006734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 10:26:03.006788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 10:26:03.006832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 10:26:03.006692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 10:26:03.833182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 10:26:03.856848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 10:26:03.900610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 10:26:03.948507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 10:26:03.974632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 10:26:03.976131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 10:26:04.023541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 10:26:04.056648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 10:26:04.095769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 10:26:04.146600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 10:26:04.148756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 10:26:04.354827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1213 10:26:06.262601       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 10:27:07 addons-543946 kubelet[1284]: E1213 10:27:07.414631    1284 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"patch\" with CrashLoopBackOff: \"back-off 20s restarting failed container=patch pod=ingress-nginx-admission-patch-qvvht_ingress-nginx(189c08d0-f24f-4089-ad31-641ebce359af)\"" pod="ingress-nginx/ingress-nginx-admission-patch-qvvht" podUID="189c08d0-f24f-4089-ad31-641ebce359af"
	Dec 13 10:27:09 addons-543946 kubelet[1284]: I1213 10:27:09.059551    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-lqcbm" podStartSLOduration=20.970412676 podStartE2EDuration="52.059498022s" podCreationTimestamp="2025-12-13 10:26:17 +0000 UTC" firstStartedPulling="2025-12-13 10:26:37.095876167 +0000 UTC m=+31.809354268" lastFinishedPulling="2025-12-13 10:27:08.18496148 +0000 UTC m=+62.898439614" observedRunningTime="2025-12-13 10:27:09.057669629 +0000 UTC m=+63.771147738" watchObservedRunningTime="2025-12-13 10:27:09.059498022 +0000 UTC m=+63.772976140"
	Dec 13 10:27:11 addons-543946 kubelet[1284]: I1213 10:27:11.413524    1284 scope.go:117] "RemoveContainer" containerID="7d0aa1e9d6077f68e6b941b6b13e70a2fdbb6aedc3824bc937b238be1aad0fbc"
	Dec 13 10:27:12 addons-543946 kubelet[1284]: I1213 10:27:12.081373    1284 scope.go:117] "RemoveContainer" containerID="7d0aa1e9d6077f68e6b941b6b13e70a2fdbb6aedc3824bc937b238be1aad0fbc"
	Dec 13 10:27:12 addons-543946 kubelet[1284]: I1213 10:27:12.089186    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-2rxfg" podStartSLOduration=39.646706586 podStartE2EDuration="51.089168621s" podCreationTimestamp="2025-12-13 10:26:21 +0000 UTC" firstStartedPulling="2025-12-13 10:26:59.783159098 +0000 UTC m=+54.496637199" lastFinishedPulling="2025-12-13 10:27:11.225621134 +0000 UTC m=+65.939099234" observedRunningTime="2025-12-13 10:27:12.088215282 +0000 UTC m=+66.801693391" watchObservedRunningTime="2025-12-13 10:27:12.089168621 +0000 UTC m=+66.802646722"
	Dec 13 10:27:13 addons-543946 kubelet[1284]: I1213 10:27:13.383351    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxpr6\" (UniqueName: \"kubernetes.io/projected/4aa216c9-85df-4716-8a7e-e75d895d591a-kube-api-access-qxpr6\") pod \"4aa216c9-85df-4716-8a7e-e75d895d591a\" (UID: \"4aa216c9-85df-4716-8a7e-e75d895d591a\") "
	Dec 13 10:27:13 addons-543946 kubelet[1284]: I1213 10:27:13.390814    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa216c9-85df-4716-8a7e-e75d895d591a-kube-api-access-qxpr6" (OuterVolumeSpecName: "kube-api-access-qxpr6") pod "4aa216c9-85df-4716-8a7e-e75d895d591a" (UID: "4aa216c9-85df-4716-8a7e-e75d895d591a"). InnerVolumeSpecName "kube-api-access-qxpr6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 10:27:13 addons-543946 kubelet[1284]: I1213 10:27:13.485250    1284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qxpr6\" (UniqueName: \"kubernetes.io/projected/4aa216c9-85df-4716-8a7e-e75d895d591a-kube-api-access-qxpr6\") on node \"addons-543946\" DevicePath \"\""
	Dec 13 10:27:14 addons-543946 kubelet[1284]: I1213 10:27:14.097894    1284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e2801dcb896a1ed7004b5d54c7d2cdca700df303956cd2874001d5f2e5e2c9"
	Dec 13 10:27:20 addons-543946 kubelet[1284]: I1213 10:27:20.413527    1284 scope.go:117] "RemoveContainer" containerID="91b2cbaf6160231ab43852a30dd5e6c539699f29113ec309f46aed5f791b0258"
	Dec 13 10:27:21 addons-543946 kubelet[1284]: I1213 10:27:21.132508    1284 scope.go:117] "RemoveContainer" containerID="91b2cbaf6160231ab43852a30dd5e6c539699f29113ec309f46aed5f791b0258"
	Dec 13 10:27:21 addons-543946 kubelet[1284]: I1213 10:27:21.149457    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-pdrq4" podStartSLOduration=44.917512464 podStartE2EDuration="1m3.149440325s" podCreationTimestamp="2025-12-13 10:26:18 +0000 UTC" firstStartedPulling="2025-12-13 10:26:59.783218832 +0000 UTC m=+54.496696933" lastFinishedPulling="2025-12-13 10:27:18.015146685 +0000 UTC m=+72.728624794" observedRunningTime="2025-12-13 10:27:18.151408103 +0000 UTC m=+72.864886220" watchObservedRunningTime="2025-12-13 10:27:21.149440325 +0000 UTC m=+75.862918434"
	Dec 13 10:27:21 addons-543946 kubelet[1284]: I1213 10:27:21.599246    1284 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 13 10:27:21 addons-543946 kubelet[1284]: I1213 10:27:21.599300    1284 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 13 10:27:22 addons-543946 kubelet[1284]: I1213 10:27:22.275785    1284 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xfqx\" (UniqueName: \"kubernetes.io/projected/189c08d0-f24f-4089-ad31-641ebce359af-kube-api-access-2xfqx\") pod \"189c08d0-f24f-4089-ad31-641ebce359af\" (UID: \"189c08d0-f24f-4089-ad31-641ebce359af\") "
	Dec 13 10:27:22 addons-543946 kubelet[1284]: I1213 10:27:22.280191    1284 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189c08d0-f24f-4089-ad31-641ebce359af-kube-api-access-2xfqx" (OuterVolumeSpecName: "kube-api-access-2xfqx") pod "189c08d0-f24f-4089-ad31-641ebce359af" (UID: "189c08d0-f24f-4089-ad31-641ebce359af"). InnerVolumeSpecName "kube-api-access-2xfqx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 10:27:22 addons-543946 kubelet[1284]: I1213 10:27:22.376430    1284 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xfqx\" (UniqueName: \"kubernetes.io/projected/189c08d0-f24f-4089-ad31-641ebce359af-kube-api-access-2xfqx\") on node \"addons-543946\" DevicePath \"\""
	Dec 13 10:27:23 addons-543946 kubelet[1284]: I1213 10:27:23.177775    1284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e27910df99afe9ad377b7bda58bbd20d7dd4d9ddaf5b22fec2cf17e9ed9cf3c6"
	Dec 13 10:27:23 addons-543946 kubelet[1284]: I1213 10:27:23.416393    1284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72f4e841-9f65-45a9-92d3-378464aef5dd" path="/var/lib/kubelet/pods/72f4e841-9f65-45a9-92d3-378464aef5dd/volumes"
	Dec 13 10:27:27 addons-543946 kubelet[1284]: I1213 10:27:27.829808    1284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-j4gkx" podStartSLOduration=4.645815585 podStartE2EDuration="1m2.829773188s" podCreationTimestamp="2025-12-13 10:26:25 +0000 UTC" firstStartedPulling="2025-12-13 10:26:26.356057848 +0000 UTC m=+21.069535949" lastFinishedPulling="2025-12-13 10:27:24.540015451 +0000 UTC m=+79.253493552" observedRunningTime="2025-12-13 10:27:25.208496436 +0000 UTC m=+79.921974586" watchObservedRunningTime="2025-12-13 10:27:27.829773188 +0000 UTC m=+82.543251305"
	Dec 13 10:27:27 addons-543946 kubelet[1284]: I1213 10:27:27.932594    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lb8p\" (UniqueName: \"kubernetes.io/projected/c4936020-682f-4f78-8ab7-70b9e9cd5ae0-kube-api-access-5lb8p\") pod \"busybox\" (UID: \"c4936020-682f-4f78-8ab7-70b9e9cd5ae0\") " pod="default/busybox"
	Dec 13 10:27:27 addons-543946 kubelet[1284]: I1213 10:27:27.932909    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c4936020-682f-4f78-8ab7-70b9e9cd5ae0-gcp-creds\") pod \"busybox\" (UID: \"c4936020-682f-4f78-8ab7-70b9e9cd5ae0\") " pod="default/busybox"
	Dec 13 10:27:28 addons-543946 kubelet[1284]: W1213 10:27:28.162220    1284 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/771f4b2573d1af3192d178e32d39d9fa5a4476bb767a301f6f5b8bfb5a73f1ef/crio-93deca5b2751b43147d3c6c0105f7a837034ba8717b9afcc75c15f9709f29b3f WatchSource:0}: Error finding container 93deca5b2751b43147d3c6c0105f7a837034ba8717b9afcc75c15f9709f29b3f: Status 404 returned error can't find the container with id 93deca5b2751b43147d3c6c0105f7a837034ba8717b9afcc75c15f9709f29b3f
	Dec 13 10:27:29 addons-543946 kubelet[1284]: E1213 10:27:29.244605    1284 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 13 10:27:29 addons-543946 kubelet[1284]: E1213 10:27:29.244710    1284 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8c14e5-85f0-48f5-95ae-fe51d21ead63-gcr-creds podName:2e8c14e5-85f0-48f5-95ae-fe51d21ead63 nodeName:}" failed. No retries permitted until 2025-12-13 10:28:33.244691741 +0000 UTC m=+147.958169842 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/2e8c14e5-85f0-48f5-95ae-fe51d21ead63-gcr-creds") pod "registry-creds-764b6fb674-sgjj5" (UID: "2e8c14e5-85f0-48f5-95ae-fe51d21ead63") : secret "registry-creds-gcr" not found
	
	
	==> storage-provisioner [cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59] <==
	W1213 10:27:16.554640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:18.558163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:18.564750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:20.568658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:20.576938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:22.581373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:22.591771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:24.598151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:24.604721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:26.608193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:26.613088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:28.616253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:28.620661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:30.623616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:30.628028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:32.630843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:32.635772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:34.638510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:34.642836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:36.646064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:36.650284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:38.653469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:38.661216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:40.664843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 10:27:40.669963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
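Editor's note: the storage-provisioner block above repeats the same client-go deprecation warning roughly every two seconds because the bundled provisioner still reads the core v1 Endpoints API (most likely for its leader-election lock); it is log noise rather than a failure. A minimal way to see what it is touching, assuming the kube-system namespace used above (the exact lock object name is not shown in this report):

	kubectl --context addons-543946 -n kube-system get endpoints
	kubectl --context addons-543946 -n kube-system get endpointslices.discovery.k8s.io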
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-543946 -n addons-543946
helpers_test.go:270: (dbg) Run:  kubectl --context addons-543946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-jx86w ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-543946 describe pod gcp-auth-certs-patch-jx86w ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-543946 describe pod gcp-auth-certs-patch-jx86w ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5: exit status 1 (83.065941ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-jx86w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-5dcld" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qvvht" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sgjj5" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-543946 describe pod gcp-auth-certs-patch-jx86w ingress-nginx-admission-create-5dcld ingress-nginx-admission-patch-qvvht registry-creds-764b6fb674-sgjj5: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable headlamp --alsologtostderr -v=1: exit status 11 (269.77173ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:27:41.715777  363969 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:41.716610  363969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:41.716650  363969 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:41.716675  363969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:41.716968  363969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:27:41.717325  363969 mustload.go:66] Loading cluster: addons-543946
	I1213 10:27:41.717772  363969 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:41.717819  363969 addons.go:622] checking whether the cluster is paused
	I1213 10:27:41.717958  363969 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:41.717999  363969 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:27:41.718551  363969 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:27:41.737077  363969 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:41.737130  363969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:27:41.756595  363969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:27:41.862260  363969 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:27:41.862354  363969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:27:41.892853  363969 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:27:41.892874  363969 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:27:41.892878  363969 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:27:41.892882  363969 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:27:41.892886  363969 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:27:41.892889  363969 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:27:41.892892  363969 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:27:41.892895  363969 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:27:41.892898  363969 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:27:41.892911  363969 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:27:41.892915  363969 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:27:41.892918  363969 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:27:41.892921  363969 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:27:41.892924  363969 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:27:41.892928  363969 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:27:41.892933  363969 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:27:41.892941  363969 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:27:41.892945  363969 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:27:41.892948  363969 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:27:41.892951  363969 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:27:41.892955  363969 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:27:41.892958  363969 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:27:41.892961  363969 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:27:41.892965  363969 cri.go:89] found id: ""
	I1213 10:27:41.893013  363969 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:27:41.913491  363969 out.go:203] 
	W1213 10:27:41.916554  363969 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:27:41.916579  363969 out.go:285] * 
	* 
	W1213 10:27:41.922253  363969 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:27:41.925103  363969 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.16s)
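Editor's note: this failure (and the CloudSpanner, LocalPath, NvidiaDevicePlugin and Yakd failures below) does not come from the addon itself but from the paused-check that "addons disable" runs first: after listing kube-system containers with crictl, minikube runs "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal sketch for reproducing the check by hand, assuming the addons-543946 profile is still running; the crictl and runc invocations are copied from the stderr above, and the final ls is only a hypothetical aid to see which runtime state directories actually exist on this crio node:

	# Succeeds in the log above: list kube-system containers the way the paused-check does.
	out/minikube-linux-arm64 -p addons-543946 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The failing step: runc's default state directory /run/runc is absent on this node.
	out/minikube-linux-arm64 -p addons-543946 ssh -- sudo runc list -f json
	# Hypothetical follow-up: inspect /run to see where the container runtime keeps its state.
	out/minikube-linux-arm64 -p addons-543946 ssh -- ls /run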

                                                
                                    
TestAddons/parallel/CloudSpanner (6.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-8bzlp" [29b4af0d-b1fe-45c9-b3e8-c9fe1ee1a8e5] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003327842s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (288.50045ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:28:00.899903  364467 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:28:00.900728  364467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:00.900767  364467 out.go:374] Setting ErrFile to fd 2...
	I1213 10:28:00.900791  364467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:00.901067  364467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:28:00.901398  364467 mustload.go:66] Loading cluster: addons-543946
	I1213 10:28:00.901866  364467 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:00.901911  364467 addons.go:622] checking whether the cluster is paused
	I1213 10:28:00.902083  364467 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:00.902121  364467 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:28:00.902659  364467 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:28:00.919602  364467 ssh_runner.go:195] Run: systemctl --version
	I1213 10:28:00.919652  364467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:28:00.946869  364467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:28:01.067085  364467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:28:01.067189  364467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:28:01.102776  364467 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:28:01.102809  364467 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:28:01.102815  364467 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:28:01.102819  364467 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:28:01.102823  364467 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:28:01.102827  364467 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:28:01.102831  364467 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:28:01.102834  364467 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:28:01.102837  364467 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:28:01.102843  364467 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:28:01.102846  364467 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:28:01.102849  364467 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:28:01.102853  364467 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:28:01.102856  364467 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:28:01.102860  364467 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:28:01.102868  364467 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:28:01.102872  364467 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:28:01.102885  364467 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:28:01.102888  364467 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:28:01.102891  364467 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:28:01.102895  364467 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:28:01.102899  364467 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:28:01.102902  364467 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:28:01.102905  364467 cri.go:89] found id: ""
	I1213 10:28:01.102958  364467 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:28:01.119057  364467 out.go:203] 
	W1213 10:28:01.122015  364467 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:28:01.122035  364467 out.go:285] * 
	* 
	W1213 10:28:01.127883  364467 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:28:01.130843  364467 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.30s)

                                                
                                    
TestAddons/parallel/LocalPath (8.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-543946 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-543946 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-543946 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [cc63d740-051c-4de6-9de8-49ceb6ee2222] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [cc63d740-051c-4de6-9de8-49ceb6ee2222] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [cc63d740-051c-4de6-9de8-49ceb6ee2222] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005459571s
addons_test.go:969: (dbg) Run:  kubectl --context addons-543946 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 ssh "cat /opt/local-path-provisioner/pvc-e9ca4193-326e-4213-a679-d66e1f982d49_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-543946 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-543946 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (265.277984ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:28:01.712136  364591 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:28:01.713177  364591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:01.713199  364591 out.go:374] Setting ErrFile to fd 2...
	I1213 10:28:01.713206  364591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:28:01.713486  364591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:28:01.713804  364591 mustload.go:66] Loading cluster: addons-543946
	I1213 10:28:01.714188  364591 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:01.714208  364591 addons.go:622] checking whether the cluster is paused
	I1213 10:28:01.714321  364591 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:28:01.714338  364591 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:28:01.714857  364591 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:28:01.732341  364591 ssh_runner.go:195] Run: systemctl --version
	I1213 10:28:01.732399  364591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:28:01.750393  364591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:28:01.858800  364591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:28:01.858923  364591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:28:01.895932  364591 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:28:01.895955  364591 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:28:01.895960  364591 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:28:01.895963  364591 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:28:01.895967  364591 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:28:01.895970  364591 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:28:01.895974  364591 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:28:01.895976  364591 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:28:01.895979  364591 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:28:01.895985  364591 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:28:01.895988  364591 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:28:01.895991  364591 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:28:01.895994  364591 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:28:01.895997  364591 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:28:01.896001  364591 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:28:01.896006  364591 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:28:01.896013  364591 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:28:01.896018  364591 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:28:01.896021  364591 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:28:01.896024  364591 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:28:01.896029  364591 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:28:01.896032  364591 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:28:01.896036  364591 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:28:01.896043  364591 cri.go:89] found id: ""
	I1213 10:28:01.896089  364591 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:28:01.914017  364591 out.go:203] 
	W1213 10:28:01.916969  364591 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:28:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:28:01.916990  364591 out.go:285] * 
	* 
	W1213 10:28:01.922569  364591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:28:01.927667  364591 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.41s)
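Editor's note: the LocalPath steps above all succeed (the PVC binds, the test-local-path pod writes file1, and the ssh cat reads it back); only the trailing storage-provisioner-rancher disable fails, for the same runc paused-check reason described under the Headlamp failure. A minimal way to inspect the same objects during a re-run (by the end of this test the pod and PVC have already been deleted); the repeated Pending polls above are expected if the local-path StorageClass uses its usual WaitForFirstConsumer binding:

	kubectl --context addons-543946 get pvc test-pvc -o jsonpath='{.spec.storageClassName} {.status.phase}'
	kubectl --context addons-543946 get pod test-local-path -o jsonpath='{.status.phase}'
	# The host-side directory the provisioner created, as read by the test:
	out/minikube-linux-arm64 -p addons-543946 ssh "ls /opt/local-path-provisioner/"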

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-8blxf" [3201a02e-fa9a-46d7-9d5d-b5a1c793e01a] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00386862s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (304.450403ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:27:53.284416  364145 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:53.285670  364145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:53.285682  364145 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:53.285688  364145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:53.285954  364145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:27:53.286270  364145 mustload.go:66] Loading cluster: addons-543946
	I1213 10:27:53.286678  364145 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:53.286700  364145 addons.go:622] checking whether the cluster is paused
	I1213 10:27:53.286814  364145 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:53.286831  364145 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:27:53.287391  364145 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:27:53.320299  364145 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:53.320374  364145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:27:53.348647  364145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:27:53.453882  364145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:27:53.453960  364145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:27:53.482168  364145 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:27:53.482189  364145 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:27:53.482195  364145 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:27:53.482198  364145 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:27:53.482211  364145 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:27:53.482215  364145 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:27:53.482219  364145 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:27:53.482222  364145 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:27:53.482226  364145 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:27:53.482232  364145 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:27:53.482236  364145 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:27:53.482239  364145 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:27:53.482242  364145 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:27:53.482245  364145 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:27:53.482253  364145 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:27:53.482258  364145 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:27:53.482261  364145 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:27:53.482265  364145 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:27:53.482268  364145 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:27:53.482271  364145 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:27:53.482276  364145 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:27:53.482283  364145 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:27:53.482286  364145 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:27:53.482289  364145 cri.go:89] found id: ""
	I1213 10:27:53.482337  364145 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:27:53.499285  364145 out.go:203] 
	W1213 10:27:53.504306  364145 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:27:53.504347  364145 out.go:285] * 
	* 
	W1213 10:27:53.509933  364145 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:27:53.513325  364145 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.31s)

                                                
                                    
TestAddons/parallel/Yakd (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-gjksm" [44ed7cd7-681f-4cac-ae62-e2e35c80649f] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004020106s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-543946 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-543946 addons disable yakd --alsologtostderr -v=1: exit status 11 (269.029633ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:27:46.989245  364032 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:27:46.990090  364032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:46.990128  364032 out.go:374] Setting ErrFile to fd 2...
	I1213 10:27:46.990151  364032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:27:46.990488  364032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:27:46.990855  364032 mustload.go:66] Loading cluster: addons-543946
	I1213 10:27:46.991335  364032 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:46.991376  364032 addons.go:622] checking whether the cluster is paused
	I1213 10:27:46.991561  364032 config.go:182] Loaded profile config "addons-543946": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:27:46.991596  364032 host.go:66] Checking if "addons-543946" exists ...
	I1213 10:27:46.992175  364032 cli_runner.go:164] Run: docker container inspect addons-543946 --format={{.State.Status}}
	I1213 10:27:47.017724  364032 ssh_runner.go:195] Run: systemctl --version
	I1213 10:27:47.017793  364032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-543946
	I1213 10:27:47.037059  364032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/addons-543946/id_rsa Username:docker}
	I1213 10:27:47.146095  364032 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:27:47.146198  364032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:27:47.174994  364032 cri.go:89] found id: "9773f1dabc6fdab993282d91bf08c5bfc6cdb97f43f4747d34fbeefb5a9e8428"
	I1213 10:27:47.175018  364032 cri.go:89] found id: "ad42e673ec298eaaa7db2c08f5f8920f71246374f8e3379263c303f8bec7be5f"
	I1213 10:27:47.175022  364032 cri.go:89] found id: "91959e0b370171d408bed0fa52d4779da678474a77f884e8f856aaa174adc963"
	I1213 10:27:47.175026  364032 cri.go:89] found id: "ade5c570c4dbe1b93fd2561bd1346b042babb17e2cf39dfd555bd5fd97e8b622"
	I1213 10:27:47.175029  364032 cri.go:89] found id: "7710a35bda17f7d94abb6d5449fdca661d858863ec7aaa9df850aa1ff0c8345a"
	I1213 10:27:47.175033  364032 cri.go:89] found id: "2cc901f4d3fb002e6510963d1c14958538efb9f5f9655d576a398653295bab78"
	I1213 10:27:47.175036  364032 cri.go:89] found id: "ead471b4c6339a21f7d4642f7382e3e17c6bc67840d5597c9f1ba7d03a90ad51"
	I1213 10:27:47.175039  364032 cri.go:89] found id: "59d844a8a4aeddcbc54c84b89e2b13932aafef3528c0c7ed2fe1d6977efc0da4"
	I1213 10:27:47.175042  364032 cri.go:89] found id: "0bcd4d507bd4ada1761081111aa585e0302721ff2aa31a88b8a4ed23ef769c46"
	I1213 10:27:47.175051  364032 cri.go:89] found id: "9df3579774fb7e75da64c468015d647c1c846c2c3a3661e9cad4e7625c077819"
	I1213 10:27:47.175055  364032 cri.go:89] found id: "f8f4b2d0d0ca01cb0298f234ed98f38bb35db421dd1ee0ecfa35f42af8a048ea"
	I1213 10:27:47.175058  364032 cri.go:89] found id: "82ec4f0d273933dd1ab10c4541bc8ece9fcf638d0fadf7561a9a044e0d84b3e3"
	I1213 10:27:47.175062  364032 cri.go:89] found id: "46a1e5bc6867179fb782aaa1b961e54bb568eba367df5af5c2b1a32cc3432bcf"
	I1213 10:27:47.175066  364032 cri.go:89] found id: "abd7fd4640572a8d631c9a4b53b0b54dbb8e15303b0aad7c22abe4c2fd31d2f9"
	I1213 10:27:47.175069  364032 cri.go:89] found id: "ca06350334c8245e57a53c1956fea31819d2cec020bfc2d72fdf601430141c8e"
	I1213 10:27:47.175073  364032 cri.go:89] found id: "d5c5cc43186b72930aa32f3cf24a96d8bf357cebf0358db4413edf761499d0af"
	I1213 10:27:47.175076  364032 cri.go:89] found id: "cc0f178df84bbe390a441a840f219d69e66c7fd3620de6752d3ee094c40cdd59"
	I1213 10:27:47.175081  364032 cri.go:89] found id: "40451bec4cc2625dade53eb6c1f0778cc9665d75785787a901b2ca8fe63f61db"
	I1213 10:27:47.175088  364032 cri.go:89] found id: "ca309cac66452ff14d4cade2b7a47b20ec31fc85df9461959e22811849d21fec"
	I1213 10:27:47.175095  364032 cri.go:89] found id: "051e9f414ee6e9ab31fd97afb8184da1ef222b1c4f7dd9b0735f3e6282f04624"
	I1213 10:27:47.175100  364032 cri.go:89] found id: "cf448155f622a981b7e826d6176c5a79e1d0b75a3b353485a3e5065aa49ad951"
	I1213 10:27:47.175103  364032 cri.go:89] found id: "76b7938d7fbe387df34dbe103158724b60d8446770ffc2084ed0c6d9dcca5419"
	I1213 10:27:47.175106  364032 cri.go:89] found id: "fe380896f1e4da12ca13e0f570ceba0fba5d8edef70e3de8c10f5159b3c36a8d"
	I1213 10:27:47.175109  364032 cri.go:89] found id: ""
	I1213 10:27:47.175159  364032 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 10:27:47.191670  364032 out.go:203] 
	W1213 10:27:47.194599  364032 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:27:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 10:27:47.194630  364032 out.go:285] * 
	* 
	W1213 10:27:47.200315  364032 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:27:47.203177  364032 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-543946 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)
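Note on the failure path above: the addon disable dies inside minikube's paused-state check, not in yakd itself. The CRI listing via `crictl ps` succeeds (all the container IDs above), but the follow-up `sudo runc list -f json` exits 1 because /run/runc does not exist on this CRI-O node. A minimal manual reproduction, assuming the addons-543946 profile is still running and that crictl is present inside the node (only commands already shown in this log are used):

    # Step that succeeds above: list kube-system containers through the CRI
    minikube -p addons-543946 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Step that fails above: runc's default state dir is missing on this node
    minikube -p addons-543946 ssh -- ls -ld /run/runc
    minikube -p addons-543946 ssh -- sudo runc list -f json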

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 10:35:11.795646  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:37:27.931287  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:37:55.637071  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:06.647710  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:06.654085  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:06.665520  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:06.687043  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:06.728496  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:06.810007  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:06.971622  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:07.293377  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:07.935596  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:09.217247  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:11.779689  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:16.901216  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:27.142772  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:39:47.624178  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:40:28.586068  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:41:50.507591  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:42:27.930457  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.494644968s)

                                                
                                                
-- stdout --
	* [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Found network options:
	  - HTTP_PROXY=localhost:38233
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:38233 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-407525 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-407525 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001125602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000694296s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000694296s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
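Both kubeadm attempts above stall at the same point: the kubelet never answers http://127.0.0.1:10248/healthz within the 4m0s wait, and the run ends with the suggestion to check `journalctl -xeu kubelet` and to retry with `--extra-config=kubelet.cgroup-driver=systemd`. A follow-up sketch built only from commands and values already printed in this log (the 192.168.49.2 address comes from the NO_PROXY warning; whether either step actually resolves the failure is not established here):

    # Inspect the kubelet inside the node, as the kubeadm output suggests
    out/minikube-linux-arm64 -p functional-407525 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p functional-407525 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # Retry the start with the hint printed at the end of stderr, after excluding
    # the node IP from the proxy (IP taken from the NO_PROXY warning above)
    export NO_PROXY="${NO_PROXY},192.168.49.2"
    out/minikube-linux-arm64 start -p functional-407525 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd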
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
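For reference, the host-port mappings in the NetworkSettings block above are what minikube reads back with the same Go template it ran earlier in this report against the addons profile. The equivalent query for this container (name and expected value taken from the inspect output above) would be:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-407525
    # expected output per the Ports block above: 33158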
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 6 (328.4416ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:43:15.044744  390301 status.go:458] kubeconfig endpoint: get endpoint: "functional-407525" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
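The status check itself points at a stale kubeconfig: "functional-407525" is missing from /home/jenkins/minikube-integration/22127-354468/kubeconfig, and the stdout above recommends `minikube update-context`. A minimal sketch of that recovery step, assuming the profile is left running and that the refreshed context takes the profile name (note the apiserver may still be unreachable here, since the start above never brought the kubelet up):

    out/minikube-linux-arm64 -p functional-407525 update-context
    kubectl config current-context
    # context name assumed to match the profile name
    kubectl get nodes --context functional-407525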
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdspecific-port1938572229/001:/mount-9p --alsologtostderr -v=1 --port 46464                 │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh -- ls -la /mount-9p                                                                                                         │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh sudo umount -f /mount-9p                                                                                                    │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount2 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount1 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount3 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount1                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount1                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh findmnt -T /mount2                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh findmnt -T /mount3                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ mount          │ -p functional-371413 --kill=true                                                                                                                  │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format short --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format yaml --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh pgrep buildkitd                                                                                                             │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ image          │ functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr                                            │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls                                                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format json --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format table --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ delete         │ -p functional-371413                                                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ start          │ -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:34:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:34:54.263828  384738 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:34:54.263925  384738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:34:54.263929  384738 out.go:374] Setting ErrFile to fd 2...
	I1213 10:34:54.263933  384738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:34:54.264300  384738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:34:54.264791  384738 out.go:368] Setting JSON to false
	I1213 10:34:54.265644  384738 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8247,"bootTime":1765613848,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:34:54.265725  384738 start.go:143] virtualization:  
	I1213 10:34:54.270457  384738 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:34:54.275331  384738 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:34:54.275452  384738 notify.go:221] Checking for updates...
	I1213 10:34:54.282286  384738 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:34:54.285707  384738 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:34:54.289063  384738 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:34:54.292165  384738 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:34:54.295301  384738 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:34:54.298649  384738 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:34:54.329110  384738 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:34:54.329216  384738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:34:54.393764  384738 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:34:54.384489725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:34:54.393863  384738 docker.go:319] overlay module found
	I1213 10:34:54.397177  384738 out.go:179] * Using the docker driver based on user configuration
	I1213 10:34:54.400141  384738 start.go:309] selected driver: docker
	I1213 10:34:54.400151  384738 start.go:927] validating driver "docker" against <nil>
	I1213 10:34:54.400164  384738 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:34:54.400927  384738 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:34:54.457349  384738 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:34:54.447779884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:34:54.457505  384738 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:34:54.457726  384738 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:34:54.460773  384738 out.go:179] * Using Docker driver with root privileges
	I1213 10:34:54.463680  384738 cni.go:84] Creating CNI manager for ""
	I1213 10:34:54.463746  384738 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:34:54.463753  384738 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:34:54.463838  384738 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:34:54.468979  384738 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:34:54.471840  384738 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:34:54.474882  384738 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:34:54.477887  384738 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:34:54.477958  384738 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:34:54.477967  384738 cache.go:65] Caching tarball of preloaded images
	I1213 10:34:54.477970  384738 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:34:54.478069  384738 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:34:54.478078  384738 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:34:54.478406  384738 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:34:54.478423  384738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json: {Name:mkaed125bcabacbbe4210616ca3e105a07884a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:34:54.498011  384738 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:34:54.498022  384738 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:34:54.498035  384738 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:34:54.498066  384738 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:34:54.498165  384738 start.go:364] duration metric: took 85.695µs to acquireMachinesLock for "functional-407525"
	I1213 10:34:54.498189  384738 start.go:93] Provisioning new machine with config: &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:34:54.498262  384738 start.go:125] createHost starting for "" (driver="docker")
	I1213 10:34:54.501829  384738 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1213 10:34:54.502117  384738 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:38233 to docker env.
	I1213 10:34:54.502141  384738 start.go:159] libmachine.API.Create for "functional-407525" (driver="docker")
	I1213 10:34:54.502162  384738 client.go:173] LocalClient.Create starting
	I1213 10:34:54.502228  384738 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 10:34:54.502262  384738 main.go:143] libmachine: Decoding PEM data...
	I1213 10:34:54.502276  384738 main.go:143] libmachine: Parsing certificate...
	I1213 10:34:54.502324  384738 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 10:34:54.502339  384738 main.go:143] libmachine: Decoding PEM data...
	I1213 10:34:54.502349  384738 main.go:143] libmachine: Parsing certificate...
	I1213 10:34:54.502704  384738 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 10:34:54.519816  384738 cli_runner.go:211] docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 10:34:54.519909  384738 network_create.go:284] running [docker network inspect functional-407525] to gather additional debugging logs...
	I1213 10:34:54.519925  384738 cli_runner.go:164] Run: docker network inspect functional-407525
	W1213 10:34:54.536077  384738 cli_runner.go:211] docker network inspect functional-407525 returned with exit code 1
	I1213 10:34:54.536111  384738 network_create.go:287] error running [docker network inspect functional-407525]: docker network inspect functional-407525: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-407525 not found
	I1213 10:34:54.536122  384738 network_create.go:289] output of [docker network inspect functional-407525]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-407525 not found
	
	** /stderr **
	I1213 10:34:54.536221  384738 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:34:54.553911  384738 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019316c0}
	I1213 10:34:54.553941  384738 network_create.go:124] attempt to create docker network functional-407525 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 10:34:54.553993  384738 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-407525 functional-407525
	I1213 10:34:54.614758  384738 network_create.go:108] docker network functional-407525 192.168.49.0/24 created
	I1213 10:34:54.614781  384738 kic.go:121] calculated static IP "192.168.49.2" for the "functional-407525" container
	I1213 10:34:54.614850  384738 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 10:34:54.629316  384738 cli_runner.go:164] Run: docker volume create functional-407525 --label name.minikube.sigs.k8s.io=functional-407525 --label created_by.minikube.sigs.k8s.io=true
	I1213 10:34:54.646902  384738 oci.go:103] Successfully created a docker volume functional-407525
	I1213 10:34:54.646993  384738 cli_runner.go:164] Run: docker run --rm --name functional-407525-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-407525 --entrypoint /usr/bin/test -v functional-407525:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 10:34:55.203979  384738 oci.go:107] Successfully prepared a docker volume functional-407525
	I1213 10:34:55.204041  384738 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:34:55.204049  384738 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 10:34:55.204116  384738 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-407525:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 10:34:59.090308  384738 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-407525:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.88615688s)
	I1213 10:34:59.090329  384738 kic.go:203] duration metric: took 3.886276052s to extract preloaded images to volume ...
	W1213 10:34:59.090483  384738 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 10:34:59.090577  384738 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 10:34:59.147711  384738 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-407525 --name functional-407525 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-407525 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-407525 --network functional-407525 --ip 192.168.49.2 --volume functional-407525:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 10:34:59.463410  384738 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Running}}
	I1213 10:34:59.487221  384738 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:34:59.512482  384738 cli_runner.go:164] Run: docker exec functional-407525 stat /var/lib/dpkg/alternatives/iptables
	I1213 10:34:59.569920  384738 oci.go:144] the created container "functional-407525" has a running status.
	I1213 10:34:59.569938  384738 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa...
	I1213 10:34:59.659016  384738 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 10:34:59.682208  384738 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:34:59.702857  384738 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 10:34:59.702880  384738 kic_runner.go:114] Args: [docker exec --privileged functional-407525 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 10:34:59.757941  384738 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:34:59.792191  384738 machine.go:94] provisionDockerMachine start ...
	I1213 10:34:59.792270  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:34:59.823429  384738 main.go:143] libmachine: Using SSH client type: native
	I1213 10:34:59.823878  384738 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:34:59.823886  384738 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:34:59.824530  384738 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44170->127.0.0.1:33158: read: connection reset by peer
	I1213 10:35:02.975231  384738 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:35:02.975245  384738 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:35:02.975306  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:02.992961  384738 main.go:143] libmachine: Using SSH client type: native
	I1213 10:35:02.993273  384738 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:35:02.993282  384738 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:35:03.156830  384738 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:35:03.156930  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:03.175700  384738 main.go:143] libmachine: Using SSH client type: native
	I1213 10:35:03.176011  384738 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:35:03.176025  384738 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:35:03.328069  384738 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:35:03.328085  384738 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:35:03.328102  384738 ubuntu.go:190] setting up certificates
	I1213 10:35:03.328112  384738 provision.go:84] configureAuth start
	I1213 10:35:03.328174  384738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:35:03.351187  384738 provision.go:143] copyHostCerts
	I1213 10:35:03.351255  384738 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:35:03.351264  384738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:35:03.351343  384738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:35:03.351504  384738 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:35:03.351508  384738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:35:03.351712  384738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:35:03.351799  384738 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:35:03.351803  384738 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:35:03.351829  384738 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:35:03.351920  384738 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:35:03.589308  384738 provision.go:177] copyRemoteCerts
	I1213 10:35:03.589362  384738 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:35:03.589402  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:03.607747  384738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:35:03.711361  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:35:03.729356  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:35:03.747635  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:35:03.765124  384738 provision.go:87] duration metric: took 436.989135ms to configureAuth
	I1213 10:35:03.765143  384738 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:35:03.765330  384738 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:35:03.765430  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:03.783257  384738 main.go:143] libmachine: Using SSH client type: native
	I1213 10:35:03.783598  384738 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:35:03.783610  384738 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:35:04.083429  384738 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:35:04.083445  384738 machine.go:97] duration metric: took 4.291240438s to provisionDockerMachine
	I1213 10:35:04.083465  384738 client.go:176] duration metric: took 9.58129776s to LocalClient.Create
	I1213 10:35:04.083487  384738 start.go:167] duration metric: took 9.581347008s to libmachine.API.Create "functional-407525"
	I1213 10:35:04.083494  384738 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:35:04.083538  384738 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:35:04.083613  384738 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:35:04.083656  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:04.103156  384738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:35:04.211758  384738 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:35:04.215310  384738 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:35:04.215328  384738 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:35:04.215338  384738 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:35:04.215395  384738 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:35:04.215490  384738 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:35:04.215594  384738 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:35:04.215648  384738 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:35:04.223358  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:35:04.242050  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:35:04.260365  384738 start.go:296] duration metric: took 176.858004ms for postStartSetup
	I1213 10:35:04.260740  384738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:35:04.277924  384738 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:35:04.278223  384738 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:35:04.278268  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:04.296195  384738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:35:04.400537  384738 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:35:04.405355  384738 start.go:128] duration metric: took 9.907077565s to createHost
	I1213 10:35:04.405372  384738 start.go:83] releasing machines lock for "functional-407525", held for 9.907199165s
	I1213 10:35:04.405453  384738 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:35:04.424982  384738 out.go:179] * Found network options:
	I1213 10:35:04.427893  384738 out.go:179]   - HTTP_PROXY=localhost:38233
	W1213 10:35:04.430786  384738 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1213 10:35:04.433618  384738 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1213 10:35:04.436518  384738 ssh_runner.go:195] Run: cat /version.json
	I1213 10:35:04.436569  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:04.436593  384738 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:35:04.436660  384738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:35:04.464944  384738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:35:04.467345  384738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:35:04.659501  384738 ssh_runner.go:195] Run: systemctl --version
	I1213 10:35:04.666171  384738 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:35:04.702624  384738 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:35:04.707556  384738 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:35:04.707628  384738 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:35:04.737269  384738 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 10:35:04.737283  384738 start.go:496] detecting cgroup driver to use...
	I1213 10:35:04.737316  384738 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:35:04.737361  384738 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:35:04.754734  384738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:35:04.767578  384738 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:35:04.767631  384738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:35:04.785470  384738 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:35:04.804405  384738 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:35:04.928558  384738 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:35:05.069742  384738 docker.go:234] disabling docker service ...
	I1213 10:35:05.069798  384738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:35:05.091671  384738 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:35:05.105418  384738 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:35:05.218616  384738 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:35:05.335389  384738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:35:05.349781  384738 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:35:05.365272  384738 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:35:05.365338  384738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:35:05.374453  384738 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:35:05.374524  384738 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:35:05.383626  384738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:35:05.392156  384738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:35:05.401149  384738 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:35:05.409096  384738 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:35:05.417884  384738 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:35:05.431440  384738 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:35:05.440617  384738 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:35:05.448061  384738 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:35:05.455445  384738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:35:05.571568  384738 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:35:05.739160  384738 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:35:05.739240  384738 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:35:05.743181  384738 start.go:564] Will wait 60s for crictl version
	I1213 10:35:05.743233  384738 ssh_runner.go:195] Run: which crictl
	I1213 10:35:05.746781  384738 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:35:05.771112  384738 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:35:05.771194  384738 ssh_runner.go:195] Run: crio --version
	I1213 10:35:05.800887  384738 ssh_runner.go:195] Run: crio --version
	I1213 10:35:05.833016  384738 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:35:05.835724  384738 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:35:05.851657  384738 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:35:05.855474  384738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:35:05.865869  384738 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:35:05.865986  384738 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:35:05.866039  384738 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:35:05.898817  384738 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:35:05.898828  384738 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:35:05.898888  384738 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:35:05.923362  384738 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:35:05.923374  384738 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:35:05.923381  384738 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:35:05.923470  384738 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:35:05.923565  384738 ssh_runner.go:195] Run: crio config
	I1213 10:35:06.000860  384738 cni.go:84] Creating CNI manager for ""
	I1213 10:35:06.000870  384738 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:35:06.000891  384738 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:35:06.000911  384738 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:35:06.001031  384738 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:35:06.001099  384738 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:35:06.016530  384738 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:35:06.016592  384738 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:35:06.025014  384738 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:35:06.038316  384738 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:35:06.051472  384738 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:35:06.064971  384738 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:35:06.068569  384738 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 10:35:06.078581  384738 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:35:06.201292  384738 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:35:06.216995  384738 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:35:06.217005  384738 certs.go:195] generating shared ca certs ...
	I1213 10:35:06.217020  384738 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:35:06.217165  384738 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:35:06.217211  384738 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:35:06.217217  384738 certs.go:257] generating profile certs ...
	I1213 10:35:06.217272  384738 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:35:06.217282  384738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt with IP's: []
	I1213 10:35:06.441314  384738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt ...
	I1213 10:35:06.441330  384738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: {Name:mkd7add43825847693a4bd74bc6ae11c16f12490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:35:06.441541  384738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key ...
	I1213 10:35:06.441551  384738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key: {Name:mkd3547085de631b2a845460542425744eb9e63d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:35:06.441654  384738 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:35:06.441665  384738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt.2185ee04 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 10:35:06.637162  384738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt.2185ee04 ...
	I1213 10:35:06.637176  384738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt.2185ee04: {Name:mka0a6d430c2a07c98f9ebaea7f229afcdd25e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:35:06.637362  384738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04 ...
	I1213 10:35:06.637376  384738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04: {Name:mkf803008bb3ae9970366b96a3923290dc52e03d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:35:06.637459  384738 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt.2185ee04 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt
	I1213 10:35:06.637534  384738 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key
	I1213 10:35:06.637595  384738 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:35:06.637607  384738 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt with IP's: []
	I1213 10:35:07.188663  384738 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt ...
	I1213 10:35:07.188691  384738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt: {Name:mkf703e53ee7fb5444a228a9890ba905b97537e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:35:07.188931  384738 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key ...
	I1213 10:35:07.188941  384738 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key: {Name:mk618202f269374767555cf80bd4495ea65bd5b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:35:07.189168  384738 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:35:07.189217  384738 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:35:07.189232  384738 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:35:07.189267  384738 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:35:07.189301  384738 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:35:07.189326  384738 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:35:07.189377  384738 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:35:07.190088  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:35:07.209295  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:35:07.227419  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:35:07.245645  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:35:07.262474  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:35:07.279743  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:35:07.296572  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:35:07.313812  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:35:07.330541  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:35:07.347481  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:35:07.363964  384738 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:35:07.380967  384738 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:35:07.393415  384738 ssh_runner.go:195] Run: openssl version
	I1213 10:35:07.399427  384738 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:35:07.406922  384738 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:35:07.414253  384738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:35:07.417690  384738 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:35:07.417747  384738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:35:07.458319  384738 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:35:07.465792  384738 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 10:35:07.473122  384738 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:35:07.480331  384738 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:35:07.487817  384738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:35:07.491569  384738 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:35:07.491627  384738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:35:07.532924  384738 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:35:07.540339  384738 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 10:35:07.547398  384738 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:35:07.554695  384738 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:35:07.561806  384738 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:35:07.565439  384738 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:35:07.565491  384738 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:35:07.605831  384738 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:35:07.613299  384738 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 10:35:07.620633  384738 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:35:07.624242  384738 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 10:35:07.624285  384738 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:35:07.624352  384738 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:35:07.624416  384738 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:35:07.651935  384738 cri.go:89] found id: ""
	I1213 10:35:07.651994  384738 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:35:07.659854  384738 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:35:07.667564  384738 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:35:07.667620  384738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:35:07.675319  384738 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:35:07.675329  384738 kubeadm.go:158] found existing configuration files:
	
	I1213 10:35:07.675390  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:35:07.683104  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:35:07.683171  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:35:07.690382  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:35:07.697846  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:35:07.697922  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:35:07.705262  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:35:07.712951  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:35:07.713003  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:35:07.720083  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:35:07.727581  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:35:07.727637  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:35:07.735106  384738 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:35:07.776248  384738 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:35:07.776852  384738 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:35:07.866517  384738 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:35:07.866580  384738 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:35:07.866620  384738 kubeadm.go:319] OS: Linux
	I1213 10:35:07.866665  384738 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:35:07.866711  384738 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:35:07.866757  384738 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:35:07.866804  384738 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:35:07.866858  384738 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:35:07.866906  384738 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:35:07.866952  384738 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:35:07.866999  384738 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:35:07.867043  384738 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:35:07.940012  384738 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:35:07.940156  384738 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:35:07.940284  384738 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:35:07.947948  384738 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:35:07.954310  384738 out.go:252]   - Generating certificates and keys ...
	I1213 10:35:07.954412  384738 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:35:07.954480  384738 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:35:08.223358  384738 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 10:35:08.840801  384738 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 10:35:08.952036  384738 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 10:35:09.025372  384738 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 10:35:09.103855  384738 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 10:35:09.104018  384738 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-407525 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:35:09.278538  384738 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 10:35:09.279169  384738 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-407525 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 10:35:09.655830  384738 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 10:35:09.846192  384738 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 10:35:10.424610  384738 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 10:35:10.424918  384738 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:35:10.655498  384738 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:35:10.912108  384738 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:35:11.054017  384738 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:35:11.429060  384738 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:35:11.628370  384738 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:35:11.629003  384738 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:35:11.631665  384738 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:35:11.635176  384738 out.go:252]   - Booting up control plane ...
	I1213 10:35:11.635289  384738 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:35:11.635372  384738 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:35:11.635443  384738 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:35:11.650967  384738 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:35:11.651218  384738 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:35:11.659623  384738 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:35:11.659715  384738 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:35:11.659754  384738 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:35:11.786516  384738 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:35:11.786629  384738 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:39:11.787607  384738 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001125602s
	I1213 10:39:11.787632  384738 kubeadm.go:319] 
	I1213 10:39:11.787730  384738 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:39:11.787927  384738 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:39:11.788106  384738 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:39:11.788114  384738 kubeadm.go:319] 
	I1213 10:39:11.788295  384738 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:39:11.788578  384738 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:39:11.788630  384738 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:39:11.788634  384738 kubeadm.go:319] 
	I1213 10:39:11.796822  384738 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:39:11.797269  384738 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:39:11.797386  384738 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:39:11.797645  384738 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:39:11.797650  384738 kubeadm.go:319] 
	I1213 10:39:11.797722  384738 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:39:11.798094  384738 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-407525 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-407525 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001125602s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:39:11.798191  384738 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:39:12.215684  384738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:39:12.228036  384738 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:39:12.228091  384738 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:39:12.235789  384738 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:39:12.235797  384738 kubeadm.go:158] found existing configuration files:
	
	I1213 10:39:12.235849  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:39:12.243490  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:39:12.243608  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:39:12.250803  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:39:12.258401  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:39:12.258455  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:39:12.265782  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:39:12.273240  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:39:12.273291  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:39:12.280537  384738 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:39:12.288127  384738 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:39:12.288189  384738 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:39:12.295712  384738 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:39:12.333908  384738 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:39:12.334006  384738 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:39:12.409759  384738 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:39:12.409834  384738 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:39:12.409871  384738 kubeadm.go:319] OS: Linux
	I1213 10:39:12.409918  384738 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:39:12.409968  384738 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:39:12.410018  384738 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:39:12.410068  384738 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:39:12.410118  384738 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:39:12.410177  384738 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:39:12.410225  384738 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:39:12.410275  384738 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:39:12.410323  384738 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:39:12.478813  384738 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:39:12.478911  384738 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:39:12.478996  384738 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:39:12.491941  384738 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:39:12.496993  384738 out.go:252]   - Generating certificates and keys ...
	I1213 10:39:12.497084  384738 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:39:12.497154  384738 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:39:12.497236  384738 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:39:12.497302  384738 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:39:12.497375  384738 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:39:12.497434  384738 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:39:12.497504  384738 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:39:12.497579  384738 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:39:12.497661  384738 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:39:12.497739  384738 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:39:12.497784  384738 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:39:12.497847  384738 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:39:12.786925  384738 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:39:13.090409  384738 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:39:13.250219  384738 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:39:13.819297  384738 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:39:14.108032  384738 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:39:14.108693  384738 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:39:14.111271  384738 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:39:14.114660  384738 out.go:252]   - Booting up control plane ...
	I1213 10:39:14.114762  384738 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:39:14.114839  384738 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:39:14.114905  384738 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:39:14.129887  384738 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:39:14.129989  384738 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:39:14.137669  384738 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:39:14.138246  384738 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:39:14.138485  384738 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:39:14.277618  384738 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:39:14.277726  384738 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:43:14.278602  384738 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000694296s
	I1213 10:43:14.278627  384738 kubeadm.go:319] 
	I1213 10:43:14.278724  384738 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:43:14.278921  384738 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:43:14.279100  384738 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:43:14.279108  384738 kubeadm.go:319] 
	I1213 10:43:14.279289  384738 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:43:14.279612  384738 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:43:14.279666  384738 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:43:14.279670  384738 kubeadm.go:319] 
	I1213 10:43:14.284594  384738 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:43:14.285008  384738 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:43:14.285116  384738 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:43:14.285351  384738 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:43:14.285356  384738 kubeadm.go:319] 
	I1213 10:43:14.285424  384738 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 10:43:14.285475  384738 kubeadm.go:403] duration metric: took 8m6.661194179s to StartCluster
	I1213 10:43:14.285522  384738 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:43:14.285672  384738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:43:14.311709  384738 cri.go:89] found id: ""
	I1213 10:43:14.311734  384738 logs.go:282] 0 containers: []
	W1213 10:43:14.311741  384738 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:43:14.311747  384738 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:43:14.311803  384738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:43:14.338280  384738 cri.go:89] found id: ""
	I1213 10:43:14.338295  384738 logs.go:282] 0 containers: []
	W1213 10:43:14.338303  384738 logs.go:284] No container was found matching "etcd"
	I1213 10:43:14.338309  384738 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:43:14.338370  384738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:43:14.362774  384738 cri.go:89] found id: ""
	I1213 10:43:14.362787  384738 logs.go:282] 0 containers: []
	W1213 10:43:14.362794  384738 logs.go:284] No container was found matching "coredns"
	I1213 10:43:14.362800  384738 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:43:14.362855  384738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:43:14.388614  384738 cri.go:89] found id: ""
	I1213 10:43:14.388628  384738 logs.go:282] 0 containers: []
	W1213 10:43:14.388635  384738 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:43:14.388640  384738 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:43:14.388697  384738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:43:14.413703  384738 cri.go:89] found id: ""
	I1213 10:43:14.413717  384738 logs.go:282] 0 containers: []
	W1213 10:43:14.413725  384738 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:43:14.413731  384738 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:43:14.413789  384738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:43:14.439114  384738 cri.go:89] found id: ""
	I1213 10:43:14.439128  384738 logs.go:282] 0 containers: []
	W1213 10:43:14.439135  384738 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:43:14.439141  384738 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:43:14.439197  384738 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:43:14.466200  384738 cri.go:89] found id: ""
	I1213 10:43:14.466214  384738 logs.go:282] 0 containers: []
	W1213 10:43:14.466231  384738 logs.go:284] No container was found matching "kindnet"
	I1213 10:43:14.466240  384738 logs.go:123] Gathering logs for kubelet ...
	I1213 10:43:14.466251  384738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:43:14.531716  384738 logs.go:123] Gathering logs for dmesg ...
	I1213 10:43:14.531735  384738 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:43:14.547306  384738 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:43:14.547322  384738 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:43:14.621256  384738 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:43:14.612706    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.613428    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.615131    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.615672    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.617284    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:43:14.612706    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.613428    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.615131    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.615672    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:14.617284    4848 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:43:14.621270  384738 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:43:14.621280  384738 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:43:14.652409  384738 logs.go:123] Gathering logs for container status ...
	I1213 10:43:14.652429  384738 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 10:43:14.680382  384738 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000694296s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 10:43:14.680427  384738 out.go:285] * 
	W1213 10:43:14.680491  384738 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000694296s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:43:14.680504  384738 out.go:285] * 
	W1213 10:43:14.682627  384738 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:43:14.687629  384738 out.go:203] 
	W1213 10:43:14.691430  384738 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000694296s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 10:43:14.691497  384738 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 10:43:14.691607  384738 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 10:43:14.696504  384738 out.go:203] 
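
Editor's note: the failure above is the kubelet never answering its health check on 127.0.0.1:10248, and the log's own suggestion points at the cgroup driver. A minimal follow-up sketch, assuming shell access to the node through minikube ssh and that the suggested --extra-config flag applies to this profile (the retry itself is hypothetical and was not part of this run):

	# inspect the kubelet on the node, per the troubleshooting hints printed by kubeadm above
	minikube ssh -p functional-407525 "sudo systemctl status kubelet"
	minikube ssh -p functional-407525 "sudo journalctl -xeu kubelet | tail -n 100"
	# hypothetical retry with the cgroup driver named in the suggestion above
	minikube start -p functional-407525 --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd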
	
	
	==> CRI-O <==
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732533912Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732697581Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732757127Z" level=info msg="Create NRI interface"
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732855753Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.73287099Z" level=info msg="runtime interface created"
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.7328837Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732889895Z" level=info msg="runtime interface starting up..."
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732900488Z" level=info msg="starting plugins..."
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732914141Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:35:05 functional-407525 crio[845]: time="2025-12-13T10:35:05.732986913Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:35:05 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:35:07 functional-407525 crio[845]: time="2025-12-13T10:35:07.943333072Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=10e88607-6939-4558-972b-62beb29cad0c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:07 functional-407525 crio[845]: time="2025-12-13T10:35:07.944045628Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=50ddcb33-89f8-47f7-a590-8fa28df06e47 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:07 functional-407525 crio[845]: time="2025-12-13T10:35:07.944508295Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=cced3733-4a4d-4e8e-aaea-47d4a12b9c3e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:07 functional-407525 crio[845]: time="2025-12-13T10:35:07.944941702Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7fe28cec-caf7-4be6-9af3-91797a85b704 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:07 functional-407525 crio[845]: time="2025-12-13T10:35:07.945409841Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eafed8a3-abe8-42d0-a368-39290a0b2f81 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:07 functional-407525 crio[845]: time="2025-12-13T10:35:07.945923339Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d7bd21b9-dce1-4890-9a30-9b4606cbc6a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:35:07 functional-407525 crio[845]: time="2025-12-13T10:35:07.946350518Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=26c186ef-a37e-4722-a2db-91c5e25f99b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:39:12 functional-407525 crio[845]: time="2025-12-13T10:39:12.482155857Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c2d92b0e-66e9-4ca1-b5b5-ecd964d4ad00 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:39:12 functional-407525 crio[845]: time="2025-12-13T10:39:12.482880795Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f313f37b-f3d4-4769-a6c8-89762a209201 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:39:12 functional-407525 crio[845]: time="2025-12-13T10:39:12.483422068Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=134899c5-9d6d-40d0-970d-abe691b231ca name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:39:12 functional-407525 crio[845]: time="2025-12-13T10:39:12.483941285Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6d53e823-d431-41ec-828f-ba42e1d0372c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:39:12 functional-407525 crio[845]: time="2025-12-13T10:39:12.484376112Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=4c9f02d3-998d-4600-a683-95c6edb739ac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:39:12 functional-407525 crio[845]: time="2025-12-13T10:39:12.484857388Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ee490ec9-e7a8-4a55-9d1e-09c75f3ff70f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:39:12 functional-407525 crio[845]: time="2025-12-13T10:39:12.485293839Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=86e1c721-b43f-4204-b5f1-c160ba13f7a3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:43:15.663736    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:15.664122    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:15.669249    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:15.669767    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:43:15.671312    4965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	
	
	==> kernel <==
	 10:43:15 up  2:25,  0 user,  load average: 0.25, 0.47, 0.99
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:43:13 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:43:13 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 13 10:43:13 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:43:13 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:43:13 functional-407525 kubelet[4774]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:43:13 functional-407525 kubelet[4774]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:43:13 functional-407525 kubelet[4774]: E1213 10:43:13.824837    4774 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:43:13 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:43:13 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:43:14 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 13 10:43:14 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:43:14 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:43:14 functional-407525 kubelet[4838]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:43:14 functional-407525 kubelet[4838]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:43:14 functional-407525 kubelet[4838]: E1213 10:43:14.588117    4838 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:43:14 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:43:14 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:43:15 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 13 10:43:15 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:43:15 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:43:15 functional-407525 kubelet[4883]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:43:15 functional-407525 kubelet[4883]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:43:15 functional-407525 kubelet[4883]: E1213 10:43:15.326662    4883 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:43:15 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:43:15 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
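The kubelet section at the end of the dump above shows the node agent crash-looping on a cgroup validation error ("kubelet is configured to not run on a host using cgroup v1", restart counter 646-648), which is consistent with the empty container list and the refused connections to the API server on localhost:8441. Newer kubelets can be told to refuse cgroup v1 hosts via the failCgroupV1 KubeletConfiguration field, which appears to be what this validation error refers to. A minimal way to confirm which cgroup mode the node is actually running (illustrative commands, not part of the recorded test run; the container name matches the profile name from the log above):

	# On the Jenkins host: cgroup2fs means cgroup v2, tmpfs means cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# Inside the minikube node container for this profile (docker driver names the container after the profile)
	docker exec functional-407525 stat -fc %T /sys/fs/cgroup/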
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 6 (348.252045ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 10:43:16.127382  390514 status.go:458] kubeconfig endpoint: get endpoint: "functional-407525" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.93s)
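The status output above also warns that kubectl is pointing at a stale context and that "functional-407525" does not appear in the kubeconfig. If this were debugged by hand, the fix the warning itself suggests would look roughly like this (a sketch only, not something the test run executed; the context name assumes minikube's default of naming the context after the profile):

	# Repair the kubeconfig entry for this profile, then confirm the context resolves
	minikube update-context -p functional-407525
	kubectl config current-context
	kubectl get nodes --context functional-407525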

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 10:43:16.142561  356328 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-407525 --alsologtostderr -v=8
E1213 10:44:06.640303  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:44:34.349558  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:47:27.930765  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:48:50.999084  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:49:06.640348  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-407525 --alsologtostderr -v=8: exit status 80 (6m5.654387676s)

                                                
                                                
-- stdout --
	* [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:43:16.189245  390588 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:43:16.189385  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189397  390588 out.go:374] Setting ErrFile to fd 2...
	I1213 10:43:16.189403  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189684  390588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:43:16.190095  390588 out.go:368] Setting JSON to false
	I1213 10:43:16.190986  390588 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8749,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:43:16.191060  390588 start.go:143] virtualization:  
	I1213 10:43:16.194511  390588 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:43:16.198204  390588 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:43:16.198321  390588 notify.go:221] Checking for updates...
	I1213 10:43:16.204163  390588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:43:16.207088  390588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:16.209934  390588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:43:16.212863  390588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:43:16.215711  390588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:43:16.219166  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:16.219330  390588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:43:16.245531  390588 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:43:16.245660  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.304777  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.295770012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.304888  390588 docker.go:319] overlay module found
	I1213 10:43:16.309644  390588 out.go:179] * Using the docker driver based on existing profile
	I1213 10:43:16.312430  390588 start.go:309] selected driver: docker
	I1213 10:43:16.312447  390588 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.312556  390588 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:43:16.312654  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.369591  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.360947105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.370024  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:16.370077  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:16.370130  390588 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.374951  390588 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:43:16.377750  390588 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:43:16.380575  390588 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:43:16.383625  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:16.383675  390588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:43:16.383684  390588 cache.go:65] Caching tarball of preloaded images
	I1213 10:43:16.383721  390588 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:43:16.383768  390588 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:43:16.383779  390588 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:43:16.383909  390588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:43:16.402414  390588 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:43:16.402437  390588 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:43:16.402458  390588 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:43:16.402490  390588 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:43:16.402563  390588 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-407525"
	I1213 10:43:16.402589  390588 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:43:16.402599  390588 fix.go:54] fixHost starting: 
	I1213 10:43:16.402860  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:16.419664  390588 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:43:16.419692  390588 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:43:16.423019  390588 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:43:16.423065  390588 machine.go:94] provisionDockerMachine start ...
	I1213 10:43:16.423166  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.440791  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.441132  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.441147  390588 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:43:16.590928  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.590952  390588 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:43:16.591012  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.608907  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.609223  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.609243  390588 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:43:16.770512  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.770629  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.791074  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.791392  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.791418  390588 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:43:16.939938  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:43:16.939965  390588 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:43:16.940042  390588 ubuntu.go:190] setting up certificates
	I1213 10:43:16.940060  390588 provision.go:84] configureAuth start
	I1213 10:43:16.940146  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:16.959231  390588 provision.go:143] copyHostCerts
	I1213 10:43:16.959277  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959321  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:43:16.959334  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959423  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:43:16.959550  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959579  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:43:16.959590  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959624  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:43:16.959682  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959708  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:43:16.959712  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959738  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:43:16.959842  390588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:43:17.067458  390588 provision.go:177] copyRemoteCerts
	I1213 10:43:17.067620  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:43:17.067673  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.087609  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.191151  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:43:17.191266  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:43:17.208031  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:43:17.208139  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:43:17.224829  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:43:17.224888  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:43:17.242075  390588 provision.go:87] duration metric: took 301.967659ms to configureAuth
	I1213 10:43:17.242106  390588 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:43:17.242287  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:17.242396  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.259726  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:17.260059  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:17.260089  390588 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:43:17.589136  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:43:17.589164  390588 machine.go:97] duration metric: took 1.166089785s to provisionDockerMachine
	I1213 10:43:17.589176  390588 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:43:17.589189  390588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:43:17.589251  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:43:17.589299  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.609214  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.715839  390588 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:43:17.719089  390588 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:43:17.719109  390588 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:43:17.719114  390588 command_runner.go:130] > VERSION_ID="12"
	I1213 10:43:17.719118  390588 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:43:17.719124  390588 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:43:17.719128  390588 command_runner.go:130] > ID=debian
	I1213 10:43:17.719139  390588 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:43:17.719147  390588 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:43:17.719152  390588 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:43:17.719195  390588 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:43:17.719216  390588 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:43:17.719233  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:43:17.719286  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:43:17.719370  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:43:17.719381  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem
	I1213 10:43:17.719455  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:43:17.719463  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> /etc/test/nested/copy/356328/hosts
	I1213 10:43:17.719505  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:43:17.727090  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:17.744131  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:43:17.760861  390588 start.go:296] duration metric: took 171.654498ms for postStartSetup
	I1213 10:43:17.760950  390588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:43:17.760996  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.777913  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.880295  390588 command_runner.go:130] > 14%
	I1213 10:43:17.880360  390588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:43:17.884436  390588 command_runner.go:130] > 169G
	I1213 10:43:17.884867  390588 fix.go:56] duration metric: took 1.482264041s for fixHost
	I1213 10:43:17.884887  390588 start.go:83] releasing machines lock for "functional-407525", held for 1.482310261s
	I1213 10:43:17.884953  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:17.902293  390588 ssh_runner.go:195] Run: cat /version.json
	I1213 10:43:17.902324  390588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:43:17.902343  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.902383  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.922251  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.922884  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:18.027684  390588 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:43:18.027820  390588 ssh_runner.go:195] Run: systemctl --version
	I1213 10:43:18.121469  390588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:43:18.124198  390588 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:43:18.124239  390588 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:43:18.124329  390588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:43:18.162710  390588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:43:18.167030  390588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:43:18.167242  390588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:43:18.167335  390588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:43:18.175207  390588 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:43:18.175230  390588 start.go:496] detecting cgroup driver to use...
	I1213 10:43:18.175264  390588 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:43:18.175320  390588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:43:18.190633  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:43:18.203672  390588 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:43:18.203747  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:43:18.219163  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:43:18.232309  390588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:43:18.357889  390588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:43:18.493929  390588 docker.go:234] disabling docker service ...
	I1213 10:43:18.494052  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:43:18.509796  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:43:18.523416  390588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:43:18.655317  390588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:43:18.778247  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:43:18.791182  390588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:43:18.805083  390588 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:43:18.806588  390588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:43:18.806679  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.815701  390588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:43:18.815803  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.824913  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.834321  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.843170  390588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:43:18.851373  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.860701  390588 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.869075  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.877860  390588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:43:18.884514  390588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:43:18.885462  390588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:43:18.893210  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.009167  390588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:43:19.185094  390588 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:43:19.185195  390588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:43:19.189492  390588 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:43:19.189518  390588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:43:19.189526  390588 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1213 10:43:19.189541  390588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:19.189566  390588 command_runner.go:130] > Access: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189581  390588 command_runner.go:130] > Modify: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189586  390588 command_runner.go:130] > Change: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189590  390588 command_runner.go:130] >  Birth: -
	I1213 10:43:19.190244  390588 start.go:564] Will wait 60s for crictl version
	I1213 10:43:19.190335  390588 ssh_runner.go:195] Run: which crictl
	I1213 10:43:19.193561  390588 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:43:19.194286  390588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:43:19.222711  390588 command_runner.go:130] > Version:  0.1.0
	I1213 10:43:19.222747  390588 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:43:19.222752  390588 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:43:19.222773  390588 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:43:19.225058  390588 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:43:19.225194  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.255970  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.256013  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.256019  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.256025  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.256044  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.256051  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.256078  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.256090  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.256094  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.256098  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.256105  390588 command_runner.go:130] >      static
	I1213 10:43:19.256109  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.256113  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.256117  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.256123  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.256128  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.256131  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.256136  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.256166  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.256195  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.258161  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.285922  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.285950  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.285964  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.285970  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.285975  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.285999  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.286010  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.286017  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.286022  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.286028  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.286046  390588 command_runner.go:130] >      static
	I1213 10:43:19.286056  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.286061  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.286075  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.286093  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.286102  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.286108  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.286132  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.286137  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.286153  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.291101  390588 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:43:19.293929  390588 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:43:19.310541  390588 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:43:19.314437  390588 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:43:19.314776  390588 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:43:19.314904  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:19.314962  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.346332  390588 command_runner.go:130] > {
	I1213 10:43:19.346357  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.346361  390588 command_runner.go:130] >     {
	I1213 10:43:19.346369  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.346374  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346380  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.346383  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346387  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346396  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.346404  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.346411  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346416  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.346423  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346429  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346436  390588 command_runner.go:130] >     },
	I1213 10:43:19.346439  390588 command_runner.go:130] >     {
	I1213 10:43:19.346445  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.346449  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346457  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.346467  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346472  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346480  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.346491  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.346494  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346508  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.346518  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346525  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346531  390588 command_runner.go:130] >     },
	I1213 10:43:19.346535  390588 command_runner.go:130] >     {
	I1213 10:43:19.346541  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.346548  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346553  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.346556  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346563  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346571  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.346582  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.346586  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346590  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.346594  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.346600  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346604  390588 command_runner.go:130] >     },
	I1213 10:43:19.346610  390588 command_runner.go:130] >     {
	I1213 10:43:19.346616  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.346621  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346628  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.346632  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346636  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346646  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.346657  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.346661  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346667  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.346671  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346675  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346679  390588 command_runner.go:130] >       },
	I1213 10:43:19.346690  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346698  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346702  390588 command_runner.go:130] >     },
	I1213 10:43:19.346705  390588 command_runner.go:130] >     {
	I1213 10:43:19.346715  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.346722  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346728  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.346731  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346736  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346745  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.346760  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.346764  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346768  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.346775  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346778  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346782  390588 command_runner.go:130] >       },
	I1213 10:43:19.346786  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346796  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346799  390588 command_runner.go:130] >     },
	I1213 10:43:19.346802  390588 command_runner.go:130] >     {
	I1213 10:43:19.346811  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.346818  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346824  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.346828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346832  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346842  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.346851  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.346859  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346863  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.346866  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346870  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346875  390588 command_runner.go:130] >       },
	I1213 10:43:19.346879  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346886  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346889  390588 command_runner.go:130] >     },
	I1213 10:43:19.346892  390588 command_runner.go:130] >     {
	I1213 10:43:19.346898  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.346911  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346917  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.346923  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346927  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346934  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.346946  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.346950  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346954  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.346958  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346964  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346967  390588 command_runner.go:130] >     },
	I1213 10:43:19.346970  390588 command_runner.go:130] >     {
	I1213 10:43:19.346977  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.346984  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346990  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.346993  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346997  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347007  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.347027  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.347034  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347038  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.347041  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347045  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.347048  390588 command_runner.go:130] >       },
	I1213 10:43:19.347053  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347058  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.347062  390588 command_runner.go:130] >     },
	I1213 10:43:19.347065  390588 command_runner.go:130] >     {
	I1213 10:43:19.347072  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.347078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.347083  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.347087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347097  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347109  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.347120  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.347124  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347132  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.347135  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347140  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.347145  390588 command_runner.go:130] >       },
	I1213 10:43:19.347149  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347155  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.347158  390588 command_runner.go:130] >     }
	I1213 10:43:19.347161  390588 command_runner.go:130] >   ]
	I1213 10:43:19.347164  390588 command_runner.go:130] > }
	I1213 10:43:19.347379  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.347391  390588 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:43:19.347452  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.372755  390588 command_runner.go:130] > {
	I1213 10:43:19.372774  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.372779  390588 command_runner.go:130] >     {
	I1213 10:43:19.372788  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.372792  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372799  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.372803  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372807  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372816  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.372824  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.372828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372832  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.372836  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372851  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372854  390588 command_runner.go:130] >     },
	I1213 10:43:19.372857  390588 command_runner.go:130] >     {
	I1213 10:43:19.372863  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.372868  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372873  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.372876  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372880  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372889  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.372897  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.372900  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372904  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.372908  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372920  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372924  390588 command_runner.go:130] >     },
	I1213 10:43:19.372927  390588 command_runner.go:130] >     {
	I1213 10:43:19.372934  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.372938  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372943  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.372947  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372950  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372958  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.372966  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.372970  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372973  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.372978  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.372982  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372985  390588 command_runner.go:130] >     },
	I1213 10:43:19.372988  390588 command_runner.go:130] >     {
	I1213 10:43:19.372994  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.372998  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373002  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.373007  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373011  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373018  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.373025  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.373029  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373033  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.373036  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373040  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373043  390588 command_runner.go:130] >       },
	I1213 10:43:19.373052  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373056  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373059  390588 command_runner.go:130] >     },
	I1213 10:43:19.373062  390588 command_runner.go:130] >     {
	I1213 10:43:19.373070  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.373078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373083  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.373087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373090  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373098  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.373110  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.373114  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373118  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.373122  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373126  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373129  390588 command_runner.go:130] >       },
	I1213 10:43:19.373132  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373136  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373139  390588 command_runner.go:130] >     },
	I1213 10:43:19.373142  390588 command_runner.go:130] >     {
	I1213 10:43:19.373148  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.373151  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373157  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.373161  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373164  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373172  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.373181  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.373184  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373188  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.373191  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373195  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373198  390588 command_runner.go:130] >       },
	I1213 10:43:19.373202  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373206  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373208  390588 command_runner.go:130] >     },
	I1213 10:43:19.373211  390588 command_runner.go:130] >     {
	I1213 10:43:19.373218  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.373222  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373230  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.373234  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373238  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373246  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.373253  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.373256  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373260  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.373263  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373267  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373270  390588 command_runner.go:130] >     },
	I1213 10:43:19.373273  390588 command_runner.go:130] >     {
	I1213 10:43:19.373279  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.373283  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373288  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.373291  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373295  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373303  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.373321  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.373324  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373328  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.373331  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373336  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373339  390588 command_runner.go:130] >       },
	I1213 10:43:19.373343  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373346  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373349  390588 command_runner.go:130] >     },
	I1213 10:43:19.373352  390588 command_runner.go:130] >     {
	I1213 10:43:19.373359  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.373362  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373367  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.373372  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373376  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373383  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.373394  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.373398  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373402  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.373405  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373409  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.373412  390588 command_runner.go:130] >       },
	I1213 10:43:19.373419  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373422  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.373426  390588 command_runner.go:130] >     }
	I1213 10:43:19.373428  390588 command_runner.go:130] >   ]
	I1213 10:43:19.373432  390588 command_runner.go:130] > }
	I1213 10:43:19.375861  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.375885  390588 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:43:19.375894  390588 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:43:19.375988  390588 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:43:19.376071  390588 ssh_runner.go:195] Run: crio config
	I1213 10:43:19.425743  390588 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:43:19.425768  390588 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:43:19.425775  390588 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:43:19.425779  390588 command_runner.go:130] > #
	I1213 10:43:19.425787  390588 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:43:19.425793  390588 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:43:19.425801  390588 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:43:19.425810  390588 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:43:19.425814  390588 command_runner.go:130] > # reload'.
	I1213 10:43:19.425821  390588 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:43:19.425828  390588 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:43:19.425838  390588 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:43:19.425844  390588 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:43:19.425847  390588 command_runner.go:130] > [crio]
	I1213 10:43:19.425854  390588 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:43:19.425862  390588 command_runner.go:130] > # containers images, in this directory.
	I1213 10:43:19.426591  390588 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:43:19.426608  390588 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:43:19.427294  390588 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:43:19.427313  390588 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:43:19.427819  390588 command_runner.go:130] > # imagestore = ""
	I1213 10:43:19.427842  390588 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:43:19.427850  390588 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:43:19.428482  390588 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:43:19.428503  390588 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:43:19.428511  390588 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:43:19.428824  390588 command_runner.go:130] > # storage_option = [
	I1213 10:43:19.429159  390588 command_runner.go:130] > # ]
	I1213 10:43:19.429181  390588 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:43:19.429189  390588 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:43:19.429811  390588 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:43:19.429832  390588 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:43:19.429847  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:43:19.429857  390588 command_runner.go:130] > # always happen on a node reboot
	I1213 10:43:19.430483  390588 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:43:19.430528  390588 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:43:19.430541  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:43:19.430547  390588 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:43:19.431051  390588 command_runner.go:130] > # version_file_persist = ""
	I1213 10:43:19.431076  390588 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:43:19.431086  390588 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:43:19.431716  390588 command_runner.go:130] > # internal_wipe = true
	I1213 10:43:19.431739  390588 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:43:19.431747  390588 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:43:19.432440  390588 command_runner.go:130] > # internal_repair = true
	I1213 10:43:19.432456  390588 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:43:19.432463  390588 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:43:19.432469  390588 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:43:19.432478  390588 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:43:19.432487  390588 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:43:19.432491  390588 command_runner.go:130] > [crio.api]
	I1213 10:43:19.432496  390588 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:43:19.432503  390588 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:43:19.432512  390588 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:43:19.432517  390588 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:43:19.432544  390588 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:43:19.432552  390588 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:43:19.432851  390588 command_runner.go:130] > # stream_port = "0"
	I1213 10:43:19.432867  390588 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:43:19.432873  390588 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:43:19.432879  390588 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:43:19.432886  390588 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:43:19.432897  390588 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:43:19.432906  390588 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433090  390588 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:43:19.433111  390588 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:43:19.433117  390588 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433335  390588 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:43:19.433354  390588 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:43:19.433362  390588 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:43:19.433373  390588 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:43:19.433389  390588 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:43:19.433408  390588 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433419  390588 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:43:19.433428  390588 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433678  390588 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:43:19.433694  390588 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:43:19.433701  390588 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:43:19.433705  390588 command_runner.go:130] > [crio.runtime]
	I1213 10:43:19.433711  390588 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:43:19.433719  390588 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:43:19.433726  390588 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:43:19.433733  390588 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:43:19.433737  390588 command_runner.go:130] > # default_ulimits = [
	I1213 10:43:19.433744  390588 command_runner.go:130] > # ]
	I1213 10:43:19.433751  390588 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:43:19.433758  390588 command_runner.go:130] > # no_pivot = false
	I1213 10:43:19.433764  390588 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:43:19.433771  390588 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:43:19.433778  390588 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:43:19.433785  390588 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:43:19.433790  390588 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:43:19.433797  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.433949  390588 command_runner.go:130] > # conmon = ""
	I1213 10:43:19.433968  390588 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:43:19.433978  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:43:19.434402  390588 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:43:19.434425  390588 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:43:19.434435  390588 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:43:19.434446  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.434453  390588 command_runner.go:130] > # conmon_env = [
	I1213 10:43:19.434466  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434472  390588 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:43:19.434478  390588 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:43:19.434484  390588 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:43:19.434488  390588 command_runner.go:130] > # default_env = [
	I1213 10:43:19.434491  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434497  390588 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:43:19.434515  390588 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:43:19.434525  390588 command_runner.go:130] > # selinux = false
	I1213 10:43:19.434535  390588 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:43:19.434543  390588 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:43:19.434555  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434559  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.434565  390588 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:43:19.434570  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434841  390588 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:43:19.434858  390588 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:43:19.434865  390588 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:43:19.434872  390588 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:43:19.434885  390588 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:43:19.434891  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434896  390588 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:43:19.434902  390588 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:43:19.434908  390588 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:43:19.434913  390588 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:43:19.434937  390588 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:43:19.434946  390588 command_runner.go:130] > # blockio parameters.
	I1213 10:43:19.434950  390588 command_runner.go:130] > # blockio_reload = false
	I1213 10:43:19.434957  390588 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:43:19.434961  390588 command_runner.go:130] > # irqbalance daemon.
	I1213 10:43:19.434966  390588 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:43:19.434972  390588 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:43:19.434982  390588 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:43:19.434992  390588 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:43:19.435365  390588 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:43:19.435381  390588 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:43:19.435387  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.435392  390588 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:43:19.435398  390588 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:43:19.435404  390588 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:43:19.435411  390588 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:43:19.435584  390588 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:43:19.435601  390588 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:43:19.435608  390588 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:43:19.435617  390588 command_runner.go:130] > # will be added.
	I1213 10:43:19.436649  390588 command_runner.go:130] > # default_capabilities = [
	I1213 10:43:19.436661  390588 command_runner.go:130] > # 	"CHOWN",
	I1213 10:43:19.436665  390588 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:43:19.436669  390588 command_runner.go:130] > # 	"FSETID",
	I1213 10:43:19.436673  390588 command_runner.go:130] > # 	"FOWNER",
	I1213 10:43:19.436679  390588 command_runner.go:130] > # 	"SETGID",
	I1213 10:43:19.436683  390588 command_runner.go:130] > # 	"SETUID",
	I1213 10:43:19.436708  390588 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:43:19.436718  390588 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:43:19.436722  390588 command_runner.go:130] > # 	"KILL",
	I1213 10:43:19.436725  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436737  390588 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:43:19.436744  390588 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:43:19.436749  390588 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:43:19.436759  390588 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:43:19.436773  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436777  390588 command_runner.go:130] > default_sysctls = [
	I1213 10:43:19.436788  390588 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:43:19.436794  390588 command_runner.go:130] > ]
	I1213 10:43:19.436799  390588 command_runner.go:130] > # List of devices on the host that a
	I1213 10:43:19.436806  390588 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:43:19.436813  390588 command_runner.go:130] > # allowed_devices = [
	I1213 10:43:19.436817  390588 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:43:19.436820  390588 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:43:19.436823  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436828  390588 command_runner.go:130] > # List of additional devices. specified as
	I1213 10:43:19.436836  390588 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:43:19.436842  390588 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:43:19.436850  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436857  390588 command_runner.go:130] > # additional_devices = [
	I1213 10:43:19.436861  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436868  390588 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:43:19.436872  390588 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:43:19.436878  390588 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:43:19.436882  390588 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:43:19.436888  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436895  390588 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:43:19.436904  390588 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:43:19.436908  390588 command_runner.go:130] > # Defaults to false.
	I1213 10:43:19.436913  390588 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:43:19.436919  390588 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:43:19.436926  390588 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:43:19.436930  390588 command_runner.go:130] > # hooks_dir = [
	I1213 10:43:19.436936  390588 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:43:19.436942  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436948  390588 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:43:19.436964  390588 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:43:19.436969  390588 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:43:19.436973  390588 command_runner.go:130] > #
	I1213 10:43:19.436981  390588 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:43:19.436992  390588 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:43:19.437001  390588 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:43:19.437008  390588 command_runner.go:130] > #
	I1213 10:43:19.437022  390588 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:43:19.437029  390588 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:43:19.437035  390588 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:43:19.437044  390588 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:43:19.437047  390588 command_runner.go:130] > #
	I1213 10:43:19.437051  390588 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:43:19.437059  390588 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:43:19.437068  390588 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:43:19.437072  390588 command_runner.go:130] > # pids_limit = -1
	I1213 10:43:19.437078  390588 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 10:43:19.437087  390588 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:43:19.437094  390588 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:43:19.437104  390588 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:43:19.437110  390588 command_runner.go:130] > # log_size_max = -1
	I1213 10:43:19.437117  390588 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:43:19.437124  390588 command_runner.go:130] > # log_to_journald = false
	I1213 10:43:19.437130  390588 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:43:19.437136  390588 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:43:19.437143  390588 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:43:19.437149  390588 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:43:19.437160  390588 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:43:19.437164  390588 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:43:19.437170  390588 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:43:19.437174  390588 command_runner.go:130] > # read_only = false
	I1213 10:43:19.437180  390588 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:43:19.437188  390588 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:43:19.437195  390588 command_runner.go:130] > # live configuration reload.
	I1213 10:43:19.437199  390588 command_runner.go:130] > # log_level = "info"
	I1213 10:43:19.437216  390588 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:43:19.437221  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.437232  390588 command_runner.go:130] > # log_filter = ""
	I1213 10:43:19.437241  390588 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437248  390588 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:43:19.437252  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437260  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437264  390588 command_runner.go:130] > # uid_mappings = ""
	I1213 10:43:19.437270  390588 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437280  390588 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:43:19.437285  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437295  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437301  390588 command_runner.go:130] > # gid_mappings = ""
	I1213 10:43:19.437308  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:43:19.437314  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437320  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437331  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437335  390588 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:43:19.437345  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:43:19.437354  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437361  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437371  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437375  390588 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:43:19.437382  390588 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:43:19.437390  390588 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:43:19.437396  390588 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:43:19.437403  390588 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:43:19.437409  390588 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:43:19.437416  390588 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:43:19.437423  390588 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:43:19.437428  390588 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:43:19.437432  390588 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:43:19.437441  390588 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:43:19.437449  390588 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:43:19.437457  390588 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:43:19.437473  390588 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:43:19.437482  390588 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:43:19.437491  390588 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:43:19.437497  390588 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:43:19.437502  390588 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:43:19.437506  390588 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:43:19.437511  390588 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:43:19.437519  390588 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:43:19.437524  390588 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:43:19.437534  390588 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:43:19.437546  390588 command_runner.go:130] > # pinns_path = ""
	I1213 10:43:19.437553  390588 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:43:19.437560  390588 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:43:19.437567  390588 command_runner.go:130] > # enable_criu_support = true
	I1213 10:43:19.437573  390588 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:43:19.437579  390588 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:43:19.437586  390588 command_runner.go:130] > # enable_pod_events = false
	I1213 10:43:19.437593  390588 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:43:19.437598  390588 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:43:19.437604  390588 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:43:19.437609  390588 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:43:19.437619  390588 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 10:43:19.437636  390588 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:43:19.437642  390588 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:43:19.437653  390588 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:43:19.437664  390588 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:43:19.437668  390588 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:43:19.437672  390588 command_runner.go:130] > # ]
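For illustration, a minimal drop-in sketch of the option documented above, using the /etc/hostname example; the file name is hypothetical:

    # /etc/crio/crio.conf.d/15-reject-absent-mounts.conf (hypothetical drop-in)
    [crio.runtime]
    # Fail container creation if /etc/hostname is absent on the host instead of
    # letting it be created as a directory.
    absent_mount_sources_to_reject = [
        "/etc/hostname",
    ]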
	I1213 10:43:19.437678  390588 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:43:19.437685  390588 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:43:19.437693  390588 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:43:19.437708  390588 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:43:19.437715  390588 command_runner.go:130] > #
	I1213 10:43:19.437724  390588 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:43:19.437729  390588 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:43:19.437737  390588 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:43:19.437742  390588 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:43:19.437752  390588 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:43:19.437760  390588 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:43:19.437764  390588 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:43:19.437769  390588 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:43:19.437775  390588 command_runner.go:130] > # monitor_env = []
	I1213 10:43:19.437780  390588 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:43:19.437787  390588 command_runner.go:130] > # allowed_annotations = []
	I1213 10:43:19.437793  390588 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:43:19.437799  390588 command_runner.go:130] > # no_sync_log = false
	I1213 10:43:19.437803  390588 command_runner.go:130] > # default_annotations = {}
	I1213 10:43:19.437807  390588 command_runner.go:130] > # stream_websockets = false
	I1213 10:43:19.437810  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.437838  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.437847  390588 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:43:19.437854  390588 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:43:19.437860  390588 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:43:19.437868  390588 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:43:19.437874  390588 command_runner.go:130] > #   in $PATH.
	I1213 10:43:19.437880  390588 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:43:19.437888  390588 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:43:19.437895  390588 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:43:19.437898  390588 command_runner.go:130] > #   state.
	I1213 10:43:19.437905  390588 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:43:19.437913  390588 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1213 10:43:19.437920  390588 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:43:19.437926  390588 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:43:19.437932  390588 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:43:19.437938  390588 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:43:19.437949  390588 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:43:19.437959  390588 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:43:19.437971  390588 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:43:19.437976  390588 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:43:19.437983  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:43:19.437993  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:43:19.438000  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:43:19.438006  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:43:19.438017  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:43:19.438026  390588 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:43:19.438042  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:43:19.438048  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:43:19.438055  390588 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:43:19.438064  390588 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:43:19.438071  390588 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:43:19.438079  390588 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:43:19.438091  390588 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:43:19.438097  390588 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:43:19.438104  390588 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:43:19.438114  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:43:19.438123  390588 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:43:19.438128  390588 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:43:19.438135  390588 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:43:19.438145  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:43:19.438153  390588 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:43:19.438160  390588 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:43:19.438168  390588 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:43:19.438173  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:43:19.438182  390588 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1213 10:43:19.438186  390588 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:43:19.438194  390588 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:43:19.438204  390588 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:43:19.438215  390588 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:43:19.438228  390588 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:43:19.438236  390588 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:43:19.438246  390588 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:43:19.438254  390588 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:43:19.438263  390588 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:43:19.438271  390588 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:43:19.438280  390588 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:43:19.438293  390588 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:43:19.438300  390588 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:43:19.438308  390588 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:43:19.438322  390588 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:43:19.438327  390588 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:43:19.438335  390588 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:43:19.438343  390588 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:43:19.438346  390588 command_runner.go:130] > #
	I1213 10:43:19.438350  390588 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:43:19.438353  390588 command_runner.go:130] > #
	I1213 10:43:19.438359  390588 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:43:19.438370  390588 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:43:19.438376  390588 command_runner.go:130] > #
	I1213 10:43:19.438383  390588 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:43:19.438392  390588 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:43:19.438395  390588 command_runner.go:130] > #
	I1213 10:43:19.438401  390588 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:43:19.438406  390588 command_runner.go:130] > # feature.
	I1213 10:43:19.438410  390588 command_runner.go:130] > #
	I1213 10:43:19.438416  390588 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:43:19.438422  390588 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:43:19.438431  390588 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:43:19.438437  390588 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:43:19.438447  390588 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:43:19.438450  390588 command_runner.go:130] > #
	I1213 10:43:19.438456  390588 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:43:19.438465  390588 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:43:19.438471  390588 command_runner.go:130] > #
	I1213 10:43:19.438478  390588 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:43:19.438486  390588 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:43:19.438491  390588 command_runner.go:130] > #
	I1213 10:43:19.438497  390588 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:43:19.438512  390588 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:43:19.438516  390588 command_runner.go:130] > # limitation.
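As a sketch of the runtimes table format documented above, a hypothetical extra handler could look like this (the handler name, binary path and root directory are illustrative, not taken from this run):

    [crio.runtime.runtimes.myruntime]
    runtime_path = "/usr/local/bin/myruntime"   # absolute path; omit to resolve the handler name via $PATH
    runtime_type = "oci"                        # "oci" (default) or "vm"
    runtime_root = "/run/myruntime"
    monitor_path = "/usr/libexec/crio/conmon"
    allowed_annotations = [
        "io.kubernetes.cri-o.Devices",
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]

A pod would then select this handler via the runtime handler it passes through the CRI (e.g. a RuntimeClass), and only the annotations listed here would be processed for it.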
	I1213 10:43:19.438523  390588 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:43:19.438528  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:43:19.438533  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438539  390588 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:43:19.438543  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438549  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438553  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438560  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438564  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438577  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438581  390588 command_runner.go:130] > allowed_annotations = [
	I1213 10:43:19.438586  390588 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:43:19.438589  390588 command_runner.go:130] > ]
	I1213 10:43:19.438594  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438599  390588 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:43:19.438604  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:43:19.438610  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438614  390588 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:43:19.438617  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438621  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438625  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438633  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438639  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438644  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438649  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438664  390588 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:43:19.438673  390588 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:43:19.438684  390588 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:43:19.438692  390588 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 10:43:19.438702  390588 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:43:19.438712  390588 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:43:19.438728  390588 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:43:19.438734  390588 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:43:19.438743  390588 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:43:19.438755  390588 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:43:19.438761  390588 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:43:19.438772  390588 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:43:19.438778  390588 command_runner.go:130] > # Example:
	I1213 10:43:19.438782  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:43:19.438787  390588 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:43:19.438793  390588 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:43:19.438801  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:43:19.438806  390588 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:43:19.438810  390588 command_runner.go:130] > # cpushares = "5"
	I1213 10:43:19.438814  390588 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:43:19.438820  390588 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:43:19.438825  390588 command_runner.go:130] > # cpulimit = "35"
	I1213 10:43:19.438837  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.438841  390588 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:43:19.438852  390588 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:43:19.438861  390588 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:43:19.438866  390588 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:43:19.438875  390588 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:43:19.438880  390588 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 10:43:19.438885  390588 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:43:19.438894  390588 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:43:19.438905  390588 command_runner.go:130] > # Default value is set to true
	I1213 10:43:19.438910  390588 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:43:19.438915  390588 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:43:19.438925  390588 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:43:19.438932  390588 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:43:19.438938  390588 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:43:19.438943  390588 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 10:43:19.438951  390588 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:43:19.438954  390588 command_runner.go:130] > # timezone = ""
	I1213 10:43:19.438961  390588 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:43:19.438967  390588 command_runner.go:130] > #
	I1213 10:43:19.438973  390588 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:43:19.438979  390588 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:43:19.438983  390588 command_runner.go:130] > [crio.image]
	I1213 10:43:19.438993  390588 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:43:19.438999  390588 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:43:19.439005  390588 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:43:19.439015  390588 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439019  390588 command_runner.go:130] > # global_auth_file = ""
	I1213 10:43:19.439024  390588 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:43:19.439029  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439034  390588 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.439040  390588 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:43:19.439048  390588 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439055  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439060  390588 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:43:19.439066  390588 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:43:19.439072  390588 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 10:43:19.439081  390588 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 10:43:19.439087  390588 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:43:19.439094  390588 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:43:19.439100  390588 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:43:19.439106  390588 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:43:19.439111  390588 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:43:19.439117  390588 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:43:19.439123  390588 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:43:19.439134  390588 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:43:19.439142  390588 command_runner.go:130] > # pinned_images = [
	I1213 10:43:19.439145  390588 command_runner.go:130] > # ]
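A short sketch of the three pinning patterns described above (image names are illustrative):

    [crio.image]
    pinned_images = [
        "registry.k8s.io/pause:3.10.1",   # exact match on the full name
        "registry.k8s.io/kube-*",         # glob: wildcard at the end
        "*coredns*",                      # keyword: wildcards on both ends
    ]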
	I1213 10:43:19.439151  390588 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:43:19.439157  390588 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:43:19.439166  390588 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:43:19.439172  390588 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:43:19.439180  390588 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:43:19.439184  390588 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:43:19.439190  390588 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:43:19.439197  390588 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:43:19.439203  390588 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:43:19.439209  390588 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1213 10:43:19.439223  390588 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:43:19.439228  390588 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:43:19.439234  390588 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:43:19.439243  390588 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:43:19.439247  390588 command_runner.go:130] > # changing them here.
	I1213 10:43:19.439253  390588 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:43:19.439260  390588 command_runner.go:130] > # insecure_registries = [
	I1213 10:43:19.439263  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439268  390588 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:43:19.439273  390588 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:43:19.439723  390588 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:43:19.439741  390588 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:43:19.439879  390588 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:43:19.439918  390588 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:43:19.439927  390588 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:43:19.439931  390588 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:43:19.439937  390588 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:43:19.439946  390588 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1213 10:43:19.439958  390588 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:43:19.439963  390588 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:43:19.439974  390588 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:43:19.439985  390588 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:43:19.439993  390588 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 10:43:19.440002  390588 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:43:19.440006  390588 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:43:19.440012  390588 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:43:19.440018  390588 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:43:19.440023  390588 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 10:43:19.440029  390588 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:43:19.440034  390588 command_runner.go:130] > # CNI plugins.
	I1213 10:43:19.440037  390588 command_runner.go:130] > [crio.network]
	I1213 10:43:19.440044  390588 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:43:19.440053  390588 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 10:43:19.440058  390588 command_runner.go:130] > # cni_default_network = ""
	I1213 10:43:19.440064  390588 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:43:19.440073  390588 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:43:19.440080  390588 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:43:19.440084  390588 command_runner.go:130] > # plugin_dirs = [
	I1213 10:43:19.440211  390588 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:43:19.440357  390588 command_runner.go:130] > # ]
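As a sketch of the network options above (the default network name is hypothetical):

    [crio.network]
    cni_default_network = "mynet"    # hypothetical; if unset, the first config found in network_dir is used
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = [
        "/opt/cni/bin/",
    ]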
	I1213 10:43:19.440384  390588 command_runner.go:130] > # List of included pod metrics.
	I1213 10:43:19.440392  390588 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:43:19.440401  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440408  390588 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:43:19.440418  390588 command_runner.go:130] > [crio.metrics]
	I1213 10:43:19.440423  390588 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:43:19.440436  390588 command_runner.go:130] > # enable_metrics = false
	I1213 10:43:19.440441  390588 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:43:19.440446  390588 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:43:19.440452  390588 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:43:19.440460  390588 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:43:19.440472  390588 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:43:19.440477  390588 command_runner.go:130] > # metrics_collectors = [
	I1213 10:43:19.440481  390588 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:43:19.440496  390588 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:43:19.440501  390588 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:43:19.440506  390588 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:43:19.440509  390588 command_runner.go:130] > # 	"operations_total",
	I1213 10:43:19.440637  390588 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:43:19.440664  390588 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:43:19.440670  390588 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:43:19.440688  390588 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:43:19.440696  390588 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:43:19.440701  390588 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:43:19.440705  390588 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:43:19.440716  390588 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:43:19.440720  390588 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:43:19.440726  390588 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:43:19.440734  390588 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:43:19.440739  390588 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:43:19.440742  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440749  390588 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:43:19.440758  390588 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:43:19.440764  390588 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:43:19.440768  390588 command_runner.go:130] > # metrics_port = 9090
	I1213 10:43:19.440773  390588 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:43:19.440901  390588 command_runner.go:130] > # metrics_socket = ""
	I1213 10:43:19.440915  390588 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:43:19.440937  390588 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:43:19.440950  390588 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:43:19.440955  390588 command_runner.go:130] > # certificate on any modification event.
	I1213 10:43:19.440959  390588 command_runner.go:130] > # metrics_cert = ""
	I1213 10:43:19.440964  390588 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:43:19.440969  390588 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:43:19.440972  390588 command_runner.go:130] > # metrics_key = ""
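A sketch that enables the metrics endpoint with a subset of the collectors listed above (values are illustrative):

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    metrics_collectors = [
        "operations_total",
        "image_pulls_failure_total",
        "containers_oom_total",
    ]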
	I1213 10:43:19.440978  390588 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:43:19.440982  390588 command_runner.go:130] > [crio.tracing]
	I1213 10:43:19.440995  390588 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:43:19.441000  390588 command_runner.go:130] > # enable_tracing = false
	I1213 10:43:19.441006  390588 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 10:43:19.441015  390588 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:43:19.441022  390588 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:43:19.441031  390588 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
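Likewise, a sketch enabling tracing against the documented collector endpoint (sampling value illustrative):

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000   # 1000000 = always sample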
	I1213 10:43:19.441039  390588 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:43:19.441042  390588 command_runner.go:130] > [crio.nri]
	I1213 10:43:19.441047  390588 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:43:19.441253  390588 command_runner.go:130] > # enable_nri = true
	I1213 10:43:19.441268  390588 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:43:19.441274  390588 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:43:19.441278  390588 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:43:19.441283  390588 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:43:19.441288  390588 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:43:19.441293  390588 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:43:19.441298  390588 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:43:19.441355  390588 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:43:19.441365  390588 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:43:19.441370  390588 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:43:19.441374  390588 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:43:19.441379  390588 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:43:19.441384  390588 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:43:19.441391  390588 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:43:19.441401  390588 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:43:19.441405  390588 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:43:19.441417  390588 command_runner.go:130] > # - OCI hook injection
	I1213 10:43:19.441427  390588 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:43:19.441435  390588 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:43:19.441440  390588 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:43:19.441444  390588 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:43:19.441453  390588 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:43:19.441460  390588 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:43:19.441466  390588 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:43:19.441469  390588 command_runner.go:130] > #
	I1213 10:43:19.441473  390588 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:43:19.441480  390588 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:43:19.441485  390588 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:43:19.441629  390588 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:43:19.441658  390588 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:43:19.441671  390588 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:43:19.441677  390588 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:43:19.441685  390588 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:43:19.441688  390588 command_runner.go:130] > # ]
	I1213 10:43:19.441694  390588 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
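A sketch of turning on the built-in NRI validator with some of the rejections listed above (the required plugin name is hypothetical):

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    nri_validator_reject_namespace_adjustment = true
    nri_validator_required_plugins = [
        "my-nri-plugin",
    ]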
	I1213 10:43:19.441700  390588 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:43:19.441709  390588 command_runner.go:130] > [crio.stats]
	I1213 10:43:19.441720  390588 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:43:19.441730  390588 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:43:19.441734  390588 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:43:19.441743  390588 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:43:19.441752  390588 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:43:19.441756  390588 command_runner.go:130] > # collection_period = 0
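For example, a sketch switching stats collection from on-demand to periodic (interval is illustrative):

    [crio.stats]
    stats_collection_period = 10   # seconds between pod/container stats collections; 0 = on demand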
	I1213 10:43:19.443275  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.403988128Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:43:19.443305  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404025092Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:43:19.443315  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404051931Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:43:19.443326  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404076596Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:43:19.443340  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404148548Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:19.443352  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404414955Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:43:19.443364  390588 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 10:43:19.443836  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:19.443854  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:19.443875  390588 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:43:19.443898  390588 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:43:19.444025  390588 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:43:19.444095  390588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:43:19.450891  390588 command_runner.go:130] > kubeadm
	I1213 10:43:19.450967  390588 command_runner.go:130] > kubectl
	I1213 10:43:19.450987  390588 command_runner.go:130] > kubelet
	I1213 10:43:19.451803  390588 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:43:19.451864  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:43:19.459352  390588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:43:19.471938  390588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:43:19.485136  390588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:43:19.498010  390588 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:43:19.501925  390588 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:43:19.502045  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.620049  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:20.022042  390588 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:43:20.022188  390588 certs.go:195] generating shared ca certs ...
	I1213 10:43:20.022221  390588 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.022446  390588 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:43:20.022567  390588 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:43:20.022606  390588 certs.go:257] generating profile certs ...
	I1213 10:43:20.022771  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:43:20.022893  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:43:20.023000  390588 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:43:20.023048  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:43:20.023081  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:43:20.023123  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:43:20.023158  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:43:20.023202  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:43:20.023238  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:43:20.023279  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:43:20.023318  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:43:20.023431  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:43:20.023496  390588 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:43:20.023540  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:43:20.023607  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:43:20.023670  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:43:20.023728  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:43:20.023828  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:20.023897  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.023941  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem -> /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.023985  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.024591  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:43:20.049939  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:43:20.071962  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:43:20.093520  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:43:20.117621  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:43:20.135349  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:43:20.152883  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:43:20.170121  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:43:20.188254  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:43:20.205892  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:43:20.223561  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:43:20.241467  390588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:43:20.254691  390588 ssh_runner.go:195] Run: openssl version
	I1213 10:43:20.260777  390588 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:43:20.261193  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.268769  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:43:20.276440  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280293  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280332  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280379  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.320848  390588 command_runner.go:130] > 3ec20f2e
	I1213 10:43:20.321296  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:43:20.328708  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.335901  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:43:20.343392  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347019  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347264  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347323  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.388019  390588 command_runner.go:130] > b5213941
	I1213 10:43:20.388604  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:43:20.396066  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.403389  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:43:20.410914  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414772  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414823  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414888  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.455731  390588 command_runner.go:130] > 51391683
	I1213 10:43:20.456248  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:43:20.463583  390588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467136  390588 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467160  390588 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:43:20.467167  390588 command_runner.go:130] > Device: 259,1	Inode: 1322536     Links: 1
	I1213 10:43:20.467174  390588 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:20.467180  390588 command_runner.go:130] > Access: 2025-12-13 10:39:12.482590700 +0000
	I1213 10:43:20.467186  390588 command_runner.go:130] > Modify: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467191  390588 command_runner.go:130] > Change: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467197  390588 command_runner.go:130] >  Birth: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467264  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:43:20.507794  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.508276  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:43:20.549373  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.549450  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:43:20.591501  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.592041  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:43:20.633163  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.633239  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:43:20.673681  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.674235  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:43:20.714863  390588 command_runner.go:130] > Certificate will not expire
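
The "openssl x509 -checkend 86400" runs above verify that each control-plane certificate will still be valid 24 hours from now; "Certificate will not expire" is openssl's success message. A minimal in-process equivalent, assuming the certificate is a single PEM block (the function name and path below are illustrative, not minikube's API):

    // Illustrative equivalent of `openssl x509 -checkend 86400`:
    // report whether the certificate expires within the given duration.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func willExpireWithin(certPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if expiring {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
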
	I1213 10:43:20.715372  390588 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:20.715472  390588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:43:20.715572  390588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:43:20.742591  390588 cri.go:89] found id: ""
	I1213 10:43:20.742663  390588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:43:20.749676  390588 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:43:20.749696  390588 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:43:20.749703  390588 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:43:20.750605  390588 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:43:20.750650  390588 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:43:20.750723  390588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:43:20.758246  390588 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:43:20.758662  390588 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-407525" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.758765  390588 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "functional-407525" cluster setting kubeconfig missing "functional-407525" context setting]
	I1213 10:43:20.759076  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.759474  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.759724  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.760259  390588 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:43:20.760282  390588 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:43:20.760289  390588 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:43:20.760294  390588 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:43:20.760299  390588 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:43:20.760595  390588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:43:20.760675  390588 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:43:20.768313  390588 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:43:20.768394  390588 kubeadm.go:602] duration metric: took 17.723293ms to restartPrimaryControlPlane
	I1213 10:43:20.768419  390588 kubeadm.go:403] duration metric: took 53.05457ms to StartCluster
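
The restart path above decides whether the control plane needs to be reconfigured by diffing the kubeadm config on disk against the freshly generated one; because the diff is empty, the log records "The running cluster does not require reconfiguration". The sketch below illustrates that decision under the assumption that a zero diff exit status means "identical"; the function name and paths are illustrative, not minikube's exact logic.

    // Illustrative "needs reconfiguration?" check based on diff's exit status.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func needsReconfiguration(current, generated string) (bool, error) {
    	err := exec.Command("diff", "-u", current, generated).Run()
    	if err == nil {
    		return false, nil // configs are identical
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, nil // files differ
    	}
    	return false, err // diff itself failed (missing file, etc.)
    }

    func main() {
    	changed, err := needsReconfiguration("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("could not compare configs:", err)
    		return
    	}
    	fmt.Println("needs reconfiguration:", changed)
    }
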
	I1213 10:43:20.768469  390588 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.768581  390588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.769195  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.769470  390588 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:43:20.769730  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:20.769792  390588 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:43:20.769868  390588 addons.go:70] Setting storage-provisioner=true in profile "functional-407525"
	I1213 10:43:20.769887  390588 addons.go:239] Setting addon storage-provisioner=true in "functional-407525"
	I1213 10:43:20.769967  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.770424  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.770582  390588 addons.go:70] Setting default-storageclass=true in profile "functional-407525"
	I1213 10:43:20.770602  390588 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-407525"
	I1213 10:43:20.770845  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.776047  390588 out.go:179] * Verifying Kubernetes components...
	I1213 10:43:20.778873  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:20.803376  390588 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:43:20.806823  390588 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:20.806848  390588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:43:20.806911  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.815503  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.815748  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.816048  390588 addons.go:239] Setting addon default-storageclass=true in "functional-407525"
	I1213 10:43:20.816085  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.816499  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.849236  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.860497  390588 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:20.860524  390588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:43:20.860587  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.893135  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.991835  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:21.017033  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:21.050080  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:21.773497  390588 node_ready.go:35] waiting up to 6m0s for node "functional-407525" to be "Ready" ...
	I1213 10:43:21.773656  390588 type.go:168] "Request Body" body=""
	I1213 10:43:21.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:21.774009  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774035  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774063  390588 retry.go:31] will retry after 178.71376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774107  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774121  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774127  390588 retry.go:31] will retry after 267.498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
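
The addon applies above fail because kubectl's client-side validation tries to download the OpenAPI schema from the apiserver on localhost:8441, which is refusing connections while the control plane restarts; minikube treats each failure as transient and retries with growing delays (178ms, 267ms, 328ms, and so on in the retry.go lines that follow). A generic sketch of that retry-with-backoff pattern is below; the jittered delays and the op callback are assumptions for illustration, not minikube's implementation.

    // Illustrative retry-with-growing-delay helper, mirroring the pattern in the
    // retry.go log lines above.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		// Grow the wait and add jitter so repeated failures do not hammer the apiserver.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	attempt := 0
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		attempt++
    		if attempt < 4 {
    			return fmt.Errorf("connection refused (attempt %d)", attempt)
    		}
    		return nil
    	})
    }
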
	I1213 10:43:21.774194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:21.953713  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.014320  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.018022  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.018057  390588 retry.go:31] will retry after 328.520116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.042240  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.097866  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.101425  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.101460  390588 retry.go:31] will retry after 340.23882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.273721  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.274173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.347588  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.405090  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.408724  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.408759  390588 retry.go:31] will retry after 330.053163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.441890  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.497250  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.500831  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.500864  390588 retry.go:31] will retry after 301.657591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.739051  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.774467  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.774545  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.774882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.796776  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.800408  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.800485  390588 retry.go:31] will retry after 1.110001612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.803607  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.863746  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.863797  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.863816  390588 retry.go:31] will retry after 925.323482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.274339  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.274464  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.274793  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:23.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.774742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.775115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:23.775193  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
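
The repeated GET requests for /api/v1/nodes/functional-407525 are minikube's node-readiness wait (up to 6m0s, polling roughly every 500ms); while the apiserver is down each poll fails with connection refused, hence the empty Response lines and the periodic node_ready warnings. The sketch below shows that wait-until-ready loop in generic form; the checkReady callback is a placeholder for the real Kubernetes client call, and the timings are illustrative only.

    // Illustrative wait-for-Ready poll loop: keep checking until success or the
    // deadline passes, treating errors (e.g. connection refused) as "not ready yet".
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func waitForReady(timeout, interval time.Duration, checkReady func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ready, err := checkReady()
    		if err != nil {
    			fmt.Println("error getting node condition (will retry):", err)
    		} else if ready {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for node to be Ready")
    }

    func main() {
    	calls := 0
    	_ = waitForReady(5*time.Second, 500*time.Millisecond, func() (bool, error) {
    		calls++
    		if calls < 4 {
    			return false, errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
    		}
    		return true, nil
    	})
    }
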
	I1213 10:43:23.789322  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:23.850165  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.853613  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.853701  390588 retry.go:31] will retry after 1.468677433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.910870  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:23.967004  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.970690  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.970723  390588 retry.go:31] will retry after 1.30336677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:24.274187  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.274270  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.274613  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:24.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.774104  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.273868  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.273973  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.274299  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:25.274422  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.322752  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:25.335088  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.335126  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.335146  390588 retry.go:31] will retry after 1.31175111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389173  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.389228  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389247  390588 retry.go:31] will retry after 1.937290048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.773818  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.773896  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.774238  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:26.274714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.274790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:26.275175  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:26.647823  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:26.708762  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:26.708815  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.708835  390588 retry.go:31] will retry after 2.338895321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.773966  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.774052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.774373  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.273820  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.327657  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:27.389087  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:27.389124  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.389154  390588 retry.go:31] will retry after 3.77996712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.774347  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.774610  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.274639  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.275025  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:28.774230  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:29.048671  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:29.108913  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:29.108956  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.108976  390588 retry.go:31] will retry after 6.196055786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.274133  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.274210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.274535  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:29.774410  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.774493  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.774856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.274678  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.274752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.275098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.774546  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.774615  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.774881  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:30.774922  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:31.169380  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:31.223779  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:31.227282  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.227315  390588 retry.go:31] will retry after 4.701439473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.274644  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.274723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.275035  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:31.773748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.274119  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.774160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:33.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.273823  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.274181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:33.274234  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:33.773733  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.273904  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.274296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.773742  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.774139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.273828  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.273922  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.305578  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:35.371590  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.371636  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.371657  390588 retry.go:31] will retry after 5.458500829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.773846  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:35.774236  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
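
Note (editor): interleaved with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-407525 roughly every 500 ms and checks the node's Ready condition; during this window every request fails with "connection refused", and the failure is periodically surfaced as the W "error getting node ... (will retry)" line. The sketch below is a minimal stand-in for that poll loop, assuming a plain HTTPS probe instead of the real protobuf-speaking Kubernetes client, so the TLS/auth details of the actual request are deliberately omitted.

// node_ready_poll.go - a sketch of the readiness poll seen above; it probes the
// apiserver URL with a plain HTTP client, not the real Kubernetes client.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-407525"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The real client authenticates with certificates; skipping verification
		// here only keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~.27s/.77s cadence in the log
	defer ticker.Stop()
	deadline := time.Now().Add(30 * time.Second)

	for range ticker.C {
		resp, err := client.Get(url)
		if err != nil {
			// While the apiserver is restarting this is the
			// "dial tcp 192.168.49.2:8441: connect: connection refused" case.
			fmt.Println("will retry:", err)
		} else {
			resp.Body.Close()
			fmt.Println("apiserver answered with status", resp.Status)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for the node to report Ready")
			return
		}
	}
}
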
	I1213 10:43:35.929536  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:35.989448  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.989487  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.989506  390588 retry.go:31] will retry after 5.007301518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:36.274095  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.274168  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.274462  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:36.774043  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.774126  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.774417  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.273882  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.773915  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:37.774386  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:38.274036  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.274110  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.274365  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:38.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.273872  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.273948  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.774053  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.273899  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:40.274309  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:40.774007  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.774083  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.774431  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.830857  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:40.888820  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:40.888869  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.888889  390588 retry.go:31] will retry after 11.437774943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.997102  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:41.058447  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:41.058511  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.058532  390588 retry.go:31] will retry after 7.34875984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.275648  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.275736  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.275995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:41.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:42.273927  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.274020  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.274372  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:42.274432  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:42.773693  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.774092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.773920  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.774021  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:44.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.274666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.274925  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:44.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:44.773692  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.273902  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.274305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.773737  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.273797  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.273879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.274217  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.774024  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.774120  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.774453  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:46.774515  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:47.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.274050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:47.773764  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.773857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.273933  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.274397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.407754  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:48.470395  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:48.474021  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.474053  390588 retry.go:31] will retry after 19.108505533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.774398  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.774473  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.774751  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:48.774803  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:49.274554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.274627  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.274988  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:49.773726  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.273886  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.273967  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.774213  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.774666  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:51.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.274611  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.274924  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:51.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:51.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.774715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.774977  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.274174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.327551  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:52.388989  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:52.389038  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.389058  390588 retry.go:31] will retry after 15.332526016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.774747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.775066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.273766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.274095  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.773894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:53.774258  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:54.273942  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.274024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:54.774619  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.774730  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.774809  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.775152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:55.775209  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:56.273860  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.273937  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:56.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.274186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.774399  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.774745  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:58.274628  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.274703  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.275023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:58.275075  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:58.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.274411  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.274483  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.274749  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.774628  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.774978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.774714  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.775059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:00.775121  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:01.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.274061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:01.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.773778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.774062  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.273872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.774185  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:03.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.273804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.274108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:03.274159  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:03.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.774368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.273910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.773901  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.773977  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:05.274252  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:05.773910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.774005  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.774314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.274302  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.274372  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.274644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.774485  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.774567  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.774982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.583825  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:07.646535  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.646580  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.646600  390588 retry.go:31] will retry after 14.697551715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.722798  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:07.774314  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.774682  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:07.774739  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:07.791129  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.791173  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.791194  390588 retry.go:31] will retry after 13.531528334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:08.273899  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.274336  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:08.774067  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.774147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.774508  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.274290  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.274369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.274678  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.774447  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.774528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:09.774936  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:10.274570  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.274961  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:10.774562  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.774915  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.273789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.274110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:12.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.273786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:12.274098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:12.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.774136  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.774066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:14.273794  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.274227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:14.274283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:14.773929  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.774010  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.774363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.273724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.273985  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:16.274139  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.274221  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.274567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:16.274622  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:16.774305  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.774378  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.774644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.274446  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.274866  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.774497  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.774899  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:18.274657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.274734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.275051  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:18.275096  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:18.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.774209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.774026  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.774099  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.774355  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.273801  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.273913  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.773981  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.774053  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.774366  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:20.774423  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:21.274357  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.274428  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.274706  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:21.323061  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:21.389635  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:21.389682  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.389701  390588 retry.go:31] will retry after 37.789083594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
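Every apply failure in this stretch of the log has the same root cause: kubectl cannot download the OpenAPI schema because nothing is answering on port 8441 while kube-apiserver is down, so validation fails before the manifest is even submitted. A minimal, hypothetical probe (in Go, matching minikube's own language; the package and function names here are illustrative, not part of minikube or the test harness) that reproduces the "connection refused" seen throughout this section:

	// Hypothetical reachability probe for the apiserver endpoint that the
	// requests above keep failing against. While kube-apiserver is down this
	// returns an error like
	// "dial tcp 192.168.49.2:8441: connect: connection refused".
	package probe

	import (
		"net"
		"time"
	)

	// apiserverReachable dials the given host:port (e.g. "192.168.49.2:8441")
	// with a short timeout and reports any connection error.
	func apiserverReachable(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

As the kubectl error text itself notes, --validate=false would only skip the OpenAPI download; the apply would still fail for as long as the apiserver refuses connections.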
	I1213 10:44:21.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.773876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.273915  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.273997  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.344570  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:22.405449  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:22.405493  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.405512  390588 retry.go:31] will retry after 23.725920264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.773711  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.773782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.774033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:23.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.274206  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:23.274261  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:23.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.773766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.774054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.274518  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.274774  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.774608  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.774678  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:25.274658  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.274733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.275077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:25.275131  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:25.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.774508  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.774773  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.274739  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.274817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.275144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.274455  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.274547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.274811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.774572  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:27.775003  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:28.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.274777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.275087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:28.773642  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.773716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.273745  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.274155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.773917  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.774248  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:30.274557  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.274641  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.274916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:30.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:30.774540  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.774632  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.774962  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.274077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.774321  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.774707  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:32.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.274604  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.274936  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:32.274993  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:32.774698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.774804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.274529  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.274787  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.774581  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.774664  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.775008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:34.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.274794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.275152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:34.275214  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:34.773858  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.773932  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.273930  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.274307  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.773735  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:36.774233  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:37.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.274140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:37.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.774471  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.774822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.274598  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.274669  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.274999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.774142  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:39.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.274562  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.274851  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:39.274908  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:39.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.774730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.775049  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.273847  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.774227  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.774300  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.774572  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:41.274605  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.274676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.275014  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:41.275084  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:41.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.273842  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.273921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.274231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.773931  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.774027  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.774383  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.273973  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.274062  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.274409  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.773648  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.773733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.773987  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:43.774033  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:44.273702  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.273808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:44.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.773958  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.273983  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.274063  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.274356  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:45.774231  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:46.131654  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:46.194295  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194358  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194451  390588 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:46.274603  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.274700  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.275072  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:46.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.774112  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.774387  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.274208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:48.273867  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.273936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:48.274241  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:48.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.774229  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.273767  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.774519  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.774595  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.774926  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:50.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.274774  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.275102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:50.275164  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:50.774065  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.774140  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.774471  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.274252  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.274326  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.274605  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.774340  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.774416  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.774757  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.274427  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.274511  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.774919  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:52.774958  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:53.274692  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.274773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.275105  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:53.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.273740  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:55.273871  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.273946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.274266  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:55.274336  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:55.773682  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.773752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.773998  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.273698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.273924  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.773928  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:57.774354  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:58.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.273873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.274218  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:58.774470  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.774560  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.774811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.179566  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:59.239921  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.239971  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.240057  390588 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:59.247585  390588 out.go:179] * Enabled addons: 
	I1213 10:44:59.249608  390588 addons.go:530] duration metric: took 1m38.479812026s for enable addons: enabled=[]
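At this point addon enablement has given up with an empty set (enabled=[]), while the surrounding loop keeps polling the node for its "Ready" condition roughly every 500ms. A rough sketch of that polling behavior, using client-go (an assumed shape for illustration, not minikube's actual node_ready.go implementation):

	// Sketch of the readiness poll visible in the surrounding log: GET the
	// node, check its Ready condition, and keep retrying while the apiserver
	// refuses connections.
	package nodewait

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
				node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Matches the warnings above, e.g. "connect: connection refused".
					log.Printf("error getting node %q (will retry): %v", name, err)
					continue
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

The poll only ends when the GET succeeds and the Ready condition is True, which is why the requests below continue unchanged until the apiserver comes back.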
	I1213 10:44:59.274157  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.274255  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.274564  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.774339  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.774421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:59.774833  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:00.278749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.278833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.279163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:00.774212  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.774297  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.774688  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.274605  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.274894  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.774686  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.774765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.775087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:01.775143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:02.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.274240  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:02.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.773792  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:04.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.274036  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.274352  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:04.274418  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:04.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.773957  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.774210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.273726  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.274127  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.773770  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:06.774260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:07.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.273836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.274400  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:07.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.774207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.273920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.274303  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.773655  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.773725  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.773989  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:09.273678  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:09.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:09.773807  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.774222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.274017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.274269  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.774349  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.774733  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:11.274712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.274783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.275094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:11.275143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:11.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.774126  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.273826  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.273930  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.773940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.273711  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.274065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.774187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:13.774240  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:14.273793  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.273953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:14.773991  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.774073  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.774396  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.274164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:15.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:16.274172  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.274247  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.280111  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 10:45:16.773739  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.273862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.274194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.773798  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.774048  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:18.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:18.274286  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:18.773986  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.774078  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.774398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.774130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:20.274061  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.274147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.274521  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:20.274567  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:20.774429  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.774513  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.774784  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.274788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.275140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.773809  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.273923  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.274330  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.773836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:22.774266  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:23.273752  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.273825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:23.773854  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.773925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:25.273932  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.274007  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:25.274311  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:25.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.773835  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.273929  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.274023  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.274342  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.774676  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.774744  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.774995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.274109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.773826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.774163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:27.774227  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:28.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.273788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.274057  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:28.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.773816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.774148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.273934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.274250  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.773725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.773794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.774055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:30.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:30.274260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:30.774238  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.774643  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.274624  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.774738  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.775064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.273830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.274149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.773762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:32.774151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:33.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.274135  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:33.773816  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.773892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.274572  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.274643  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.274903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.774729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:34.775152  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:35.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.273759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.274117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:35.774407  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.774479  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.774771  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.274663  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.274756  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.275065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.773912  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.774265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:37.273706  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.273778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.274054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:37.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:37.773740  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.773842  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.273961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.773975  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.774042  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.774302  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:39.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:39.274262  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:39.773743  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.273728  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.274144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.774643  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.774717  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.775033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.273691  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.273765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.774789  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:41.774848  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:42.274590  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.274665  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.275006  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:42.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.774116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.274417  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.274505  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.274764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.774491  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.774561  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:43.774985  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:44.274631  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.274716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:44.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.774086  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.273789  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.273877  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.773938  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.774016  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:46.274211  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.274311  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.274593  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:46.274641  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:46.774347  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.774423  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.774786  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.274695  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.773821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.273791  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.274221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.773944  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:48.774398  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:49.273717  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.274115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:49.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.273881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.774153  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.774227  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.774498  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:50.774547  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:51.274578  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.274980  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:51.773696  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.773772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.774097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.274044  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.774214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:53.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.274028  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.274362  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:53.274420  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:53.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.773918  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.273749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.773750  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:55.774229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:56.273954  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.274030  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.274368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:56.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.774681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.773886  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.773969  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.774297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:57.774351  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:58.274008  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.274074  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.274328  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:58.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.273755  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.273831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.274152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.773661  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.773978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:00.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.273870  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.274207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:00.274265  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:00.774194  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.774271  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.274425  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.274499  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.274770  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.774648  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.774734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.773686  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.774020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:02.774062  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:03.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.273890  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.274214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:03.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.274309  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.274379  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.274657  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.774430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.774509  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:04.774924  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:05.274540  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.274616  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.274963  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:05.773676  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.773758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.774085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.273969  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.274052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.274459  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:07.274619  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.274708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.274974  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:07.275017  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:07.773671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.273847  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.274261  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.773957  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.774035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.774397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.274256  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.773968  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.774044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.774403  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:09.774460  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:10.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:10.774136  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.774210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.274519  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.274594  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.274918  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.774397  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.774832  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:11.774891  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:12.274659  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.274757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:12.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.273921  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.273994  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.773843  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.774234  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:14.273963  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.274066  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.274415  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:14.274474  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:14.773715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.273806  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.274220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.773837  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.773921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:16.274096  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.274165  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.274517  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:16.274565  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:16.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.774356  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.774701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.274489  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.274563  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.274929  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.773641  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.773710  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.773957  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:18.274732  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.274812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.275153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:18.275207  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:18.773906  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.773982  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.774326  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.274430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.274794  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.774601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.774671  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.273724  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.774125  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.774196  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:20.774628  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:21.274424  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.274514  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.274834  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:21.774531  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.774612  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.774944  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.274640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.274709  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.275021  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.774663  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.774773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.775134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:22.775197  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:23.273890  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.273971  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.274309  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:23.773717  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.774083  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:25.274593  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.274667  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.274932  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:25.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:25.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.773769  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.774103  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.274187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.773723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.773803  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.774134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.773942  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.774024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.774376  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:27.774430  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:28.274709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.274789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:28.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.774272  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.273759  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.774348  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.774419  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:29.774820  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:30.274620  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.274696  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.275046  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.775077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.273951  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:32.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.273869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:32.274272  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.773932  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.774017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.774448  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.273707  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.273777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.274033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:34.774219  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:35.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.273839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:35.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.774091  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.273704  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.273807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.773734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:37.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:37.274109  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:37.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.774167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.273869  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.273941  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.774621  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.774711  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.774971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:39.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.273795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.274130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:39.274185  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:39.773882  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.773961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.273738  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.273832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.274158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.774749  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.774834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.775222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:41.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.274347  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:41.274405  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:41.774636  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.774701  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.773953  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.774405  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:43.274638  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.274978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:43.275016  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:43.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.773806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.274363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.774070  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.774138  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.774399  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.273823  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.273898  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.274268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.773995  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.774070  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.774394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:45.774448  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:46.274246  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.274313  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.274596  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:46.774345  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.774417  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.774765  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.274423  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.274522  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.274846  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.774170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.774241  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.774544  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:47.774600  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:48.274170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.274257  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.274614  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:48.774460  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.774547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.774903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.274601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.274681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.274964  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.773817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:50.273855  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.273935  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.274285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:50.274341  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:50.774135  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.774202  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.774454  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.274467  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.274552  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.274884  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.774669  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.774754  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.775052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.273723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.274094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.774189  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:52.774245  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:53.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.274313  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:53.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.274242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.773831  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:54.774330  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:55.273935  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.274280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:55.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.774166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.273793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.274128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.774284  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.774353  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.774609  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:56.774649  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:57.274349  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.274429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.274756  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:57.774568  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.774644  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.274491  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.274570  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.274873  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.774677  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.774750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.775093  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:58.775146  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:59.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.274092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:59.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.273965  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.774530  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.774877  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:01.273680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.274056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:01.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:01.773802  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.774231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.273805  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.773820  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.774149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:03.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.273876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:03.274268  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:03.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.274436  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.274533  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.274808  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.774676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.775027  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.273736  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.273815  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.773934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:05.774242  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:06.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.274139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:06.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.773936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.774268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.274469  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.274550  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.274856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.774641  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.775047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:07.775098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:08.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.273853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:08.773674  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.773747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.773993  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.273756  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:10.274330  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.274409  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.274689  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:10.274730  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:10.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.775070  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.773673  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.773751  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.774001  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:12.774276  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:13.273922  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.273993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.274301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:13.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.774158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.274297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.773969  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.774294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:14.774335  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:15.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:15.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.773859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.774205  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.273875  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.274219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 10:47:16.775086  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:17.273732  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:17.773664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.773749  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.774040  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.773831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.774146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:19.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.273784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:19.274151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:19.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.773873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.774244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.273959  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.274044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.274394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.774369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.774676  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.274781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:21.275128  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:21.773729  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.773910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.273864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.774583  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:23.774974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:24.274727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.274797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.275112  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:24.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.274148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.774201  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:26.273894  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.273970  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:26.274358  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:26.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.774082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.773769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.773862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.273908  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:28.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:29.273783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.274195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:29.773879  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.773954  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.274239  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.775063  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:30.775117  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:31.273664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.273730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.273976  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:31.773680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.774074  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.273770  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:33.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.273816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.274165  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:33.274237  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:33.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.274209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.774154  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.273810  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.773845  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:35.774222  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:36.273675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.274088  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:36.773810  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.774215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.274138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.773861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.774225  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:37.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:38.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.274035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:38.774693  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.774771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.775056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.773832  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.773906  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.774253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:39.774308  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:40.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.274596  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.274862  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:40.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.774759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.775099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.274171  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.773727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.773800  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:42.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.274281  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:42.274339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:42.773878  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.773968  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.774283  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.274019  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.274334  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.774150  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.274183  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.773864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.774198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:44.774253  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:45.273924  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.274419  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:45.773843  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.773923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.774295  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.274029  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:46.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:47.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.274043  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.274393  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:47.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.773795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.773914  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.773990  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.774305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:48.774364  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:49.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.273791  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:49.773785  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.274190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.774233  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.774309  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.774588  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:50.774631  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:51.274650  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.274724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.275059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:51.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.774236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.274538  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.274799  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.774588  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.774666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.775007  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:52.775061  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:53.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:53.773675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.773745  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.774008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.273801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.773943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:55.273989  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.274065  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.274332  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:55.274372  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:55.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.774114  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.774457  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.274294  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.274368  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.274696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.774209  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.774284  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.774573  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:57.274365  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.274443  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.274796  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:57.274856  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:57.774615  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.774691  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.775029  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.274293  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.274363  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.274642  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.774411  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.774519  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.774841  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:59.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.274571  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.274905  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:59.274961  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:59.774120  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.774186  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.774529  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.274587  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.274674  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.275002  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.773691  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.773785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.774128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.273694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.273766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.274084  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.773905  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.774301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:01.774362  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:02.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.273943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:02.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.773929  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.273855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.773848  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.774192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:04.274348  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.274421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.274701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:04.274747  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:04.774520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.774598  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.774955  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.274625  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.274699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.275061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.273741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.773880  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.773956  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:06.774339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:07.273666  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.274015  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:07.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.773867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.273802  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.774472  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.774731  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:08.774771  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:09.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.274602  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.274979  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:09.774731  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.774819  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.775148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.274501  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.274577  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.274825  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.774760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.775071  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:10.775127  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:11.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.273737  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:11.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.774619  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.774916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.274606  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.274685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.275008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.773772  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.773849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:13.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.274085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:13.274132  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:13.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.273776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.773757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.774016  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:15.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.274160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:15.274217  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:15.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.273918  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.773913  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.773993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:17.273914  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:17.274360  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:17.773705  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.773779  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.774047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.274175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:19.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.274589  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:19.274893  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:19.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.774722  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.775081  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.273688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.273761  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.773877  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.773951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.774252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.274225  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.274303  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.274658  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.774461  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.774542  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:21.774990  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:22.273646  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.273719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.273971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:22.773678  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.773773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.273879  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.774466  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.774555  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.774828  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:24.274703  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.274778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.275113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:24.275166  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:24.773777  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.273716  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.773749  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.273812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.274134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.774477  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.774735  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:26.774777  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:27.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.274638  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.274990  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:27.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.274454  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.274531  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.774713  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:28.775072  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:29.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:29.773685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.773767  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.774067  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.274172  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:31.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.273960  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.274245  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:31.274287  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:31.773960  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.774353  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.273874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.274212  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.774110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.774195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:33.774250  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:34.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.274551  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.274859  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:34.774530  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.774653  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.774994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.774121  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:36.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:36.274191  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:36.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.773953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.273782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.274052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.774133  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.274096  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.774434  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.774523  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.774857  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:38.774915  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:39.274697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.274775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:39.773799  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.773875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.274392  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.274461  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.274778  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.774675  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:40.775056  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:41.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.274099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:41.774223  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.774306  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.774579  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.274405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.274535  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.274934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.774574  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:43.273697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.274034  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:43.274076  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:43.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.773825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.273947  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:45.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.274348  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:45.274406  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:45.774078  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.774155  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.774567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.274333  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.274401  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.274668  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.774394  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.774466  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.774810  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:47.274617  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.275033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:47.275083  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:47.774292  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.774364  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.774696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.274590  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.274935  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.774610  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.775020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.273781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.274042  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.773747  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:49.774228  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:50.273926  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.274364  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:50.774202  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.774276  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.274422  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.274498  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.274822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.774623  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.774699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.775050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:51.775104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:52.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.273845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:52.773759  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.273848  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.273927  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.774090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:54.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:54.274238  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.273994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.773662  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.773743  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:56.274020  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.274092  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.274398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:56.274455  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:56.773718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.773898  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.773979  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.774308  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.274114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.774247  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:58.774302  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:59.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:59.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.273835  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.273945  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.274259  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.774386  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.774788  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:00.774843  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:01.274715  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.274784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:01.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.273897  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.274252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.773815  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.773883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:03.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.273923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.274294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:03.274348  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:03.773866  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.773946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.774285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.273977  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.274050  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.274314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.273962  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.773962  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.774279  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:05.774317  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:06.274277  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.274357  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.274684  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:06.774350  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.774429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.774754  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.274072  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.274145  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.274401  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.774168  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:08.273771  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:08.274229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:08.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.773911  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.773987  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.774329  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:10.274643  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.274715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.275018  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:10.275073  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:10.774631  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.774708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.273785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.274118  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.273785  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.773779  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:12.774264  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:13.274414  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.274491  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.274806  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:13.774595  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.274700  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.274776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.275122  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.773666  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.773732  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:15.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.273760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:15.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:15.773812  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.273920  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.774406  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:17.274090  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.274171  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.274528  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:17.274584  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:17.774247  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.774320  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.774585  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.274376  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.274452  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.274800  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.774498  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:19.274279  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.274351  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.274659  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:19.274729  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:19.774509  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.774592  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.774934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.273655  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.273729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.773657  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.773723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.773970  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:49:21.273834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:21.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.773895  390588 type.go:168] "Request Body" body=""
	W1213 10:49:21.773963  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1213 10:49:21.773982  390588 node_ready.go:38] duration metric: took 6m0.000438977s for node "functional-407525" to be "Ready" ...
	I1213 10:49:21.777070  390588 out.go:203] 
	W1213 10:49:21.779923  390588 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:49:21.779945  390588 out.go:285] * 
	W1213 10:49:21.782066  390588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:21.784854  390588 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-407525 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.277068439s for "functional-407525" cluster.
I1213 10:49:22.419816  356328 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (353.328872ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 logs -n 25: (1.047672053s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-371413 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh -- ls -la /mount-9p                                                                                                         │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh sudo umount -f /mount-9p                                                                                                    │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount2 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount1 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount3 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount1                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount1                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh findmnt -T /mount2                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh findmnt -T /mount3                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ mount          │ -p functional-371413 --kill=true                                                                                                                  │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format short --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format yaml --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh pgrep buildkitd                                                                                                             │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ image          │ functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr                                            │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls                                                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format json --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format table --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ delete         │ -p functional-371413                                                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ start          │ -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ start          │ -p functional-407525 --alsologtostderr -v=8                                                                                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:43 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:43:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:43:16.189245  390588 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:43:16.189385  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189397  390588 out.go:374] Setting ErrFile to fd 2...
	I1213 10:43:16.189403  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189684  390588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:43:16.190095  390588 out.go:368] Setting JSON to false
	I1213 10:43:16.190986  390588 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8749,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:43:16.191060  390588 start.go:143] virtualization:  
	I1213 10:43:16.194511  390588 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:43:16.198204  390588 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:43:16.198321  390588 notify.go:221] Checking for updates...
	I1213 10:43:16.204163  390588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:43:16.207088  390588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:16.209934  390588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:43:16.212863  390588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:43:16.215711  390588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:43:16.219166  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:16.219330  390588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:43:16.245531  390588 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:43:16.245660  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.304777  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.295770012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.304888  390588 docker.go:319] overlay module found
	I1213 10:43:16.309644  390588 out.go:179] * Using the docker driver based on existing profile
	I1213 10:43:16.312430  390588 start.go:309] selected driver: docker
	I1213 10:43:16.312447  390588 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.312556  390588 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:43:16.312654  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.369591  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.360947105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.370024  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:16.370077  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:16.370130  390588 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.374951  390588 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:43:16.377750  390588 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:43:16.380575  390588 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:43:16.383625  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:16.383675  390588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:43:16.383684  390588 cache.go:65] Caching tarball of preloaded images
	I1213 10:43:16.383721  390588 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:43:16.383768  390588 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:43:16.383779  390588 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:43:16.383909  390588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:43:16.402414  390588 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:43:16.402437  390588 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:43:16.402458  390588 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:43:16.402490  390588 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:43:16.402563  390588 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-407525"
	I1213 10:43:16.402589  390588 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:43:16.402599  390588 fix.go:54] fixHost starting: 
	I1213 10:43:16.402860  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:16.419664  390588 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:43:16.419692  390588 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:43:16.423019  390588 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:43:16.423065  390588 machine.go:94] provisionDockerMachine start ...
	I1213 10:43:16.423166  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.440791  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.441132  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.441147  390588 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:43:16.590928  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.590952  390588 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:43:16.591012  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.608907  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.609223  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.609243  390588 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:43:16.770512  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.770629  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.791074  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.791392  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.791418  390588 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:43:16.939938  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:43:16.939965  390588 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:43:16.940042  390588 ubuntu.go:190] setting up certificates
	I1213 10:43:16.940060  390588 provision.go:84] configureAuth start
	I1213 10:43:16.940146  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:16.959231  390588 provision.go:143] copyHostCerts
	I1213 10:43:16.959277  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959321  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:43:16.959334  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959423  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:43:16.959550  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959579  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:43:16.959590  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959624  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:43:16.959682  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959708  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:43:16.959712  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959738  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:43:16.959842  390588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:43:17.067458  390588 provision.go:177] copyRemoteCerts
	I1213 10:43:17.067620  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:43:17.067673  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.087609  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.191151  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:43:17.191266  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:43:17.208031  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:43:17.208139  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:43:17.224829  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:43:17.224888  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:43:17.242075  390588 provision.go:87] duration metric: took 301.967659ms to configureAuth
	I1213 10:43:17.242106  390588 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:43:17.242287  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:17.242396  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.259726  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:17.260059  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:17.260089  390588 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:43:17.589136  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:43:17.589164  390588 machine.go:97] duration metric: took 1.166089785s to provisionDockerMachine
	I1213 10:43:17.589176  390588 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:43:17.589189  390588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:43:17.589251  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:43:17.589299  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.609214  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.715839  390588 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:43:17.719089  390588 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:43:17.719109  390588 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:43:17.719114  390588 command_runner.go:130] > VERSION_ID="12"
	I1213 10:43:17.719118  390588 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:43:17.719124  390588 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:43:17.719128  390588 command_runner.go:130] > ID=debian
	I1213 10:43:17.719139  390588 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:43:17.719147  390588 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:43:17.719152  390588 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:43:17.719195  390588 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:43:17.719216  390588 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:43:17.719233  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:43:17.719286  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:43:17.719370  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:43:17.719381  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem
	I1213 10:43:17.719455  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:43:17.719463  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> /etc/test/nested/copy/356328/hosts
	I1213 10:43:17.719505  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:43:17.727090  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:17.744131  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:43:17.760861  390588 start.go:296] duration metric: took 171.654498ms for postStartSetup
	I1213 10:43:17.760950  390588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:43:17.760996  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.777913  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.880295  390588 command_runner.go:130] > 14%
	I1213 10:43:17.880360  390588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:43:17.884436  390588 command_runner.go:130] > 169G
	I1213 10:43:17.884867  390588 fix.go:56] duration metric: took 1.482264041s for fixHost
	I1213 10:43:17.884887  390588 start.go:83] releasing machines lock for "functional-407525", held for 1.482310261s
	I1213 10:43:17.884953  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:17.902293  390588 ssh_runner.go:195] Run: cat /version.json
	I1213 10:43:17.902324  390588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:43:17.902343  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.902383  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.922251  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.922884  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:18.027684  390588 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:43:18.027820  390588 ssh_runner.go:195] Run: systemctl --version
	I1213 10:43:18.121469  390588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:43:18.124198  390588 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:43:18.124239  390588 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:43:18.124329  390588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:43:18.162710  390588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:43:18.167030  390588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:43:18.167242  390588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:43:18.167335  390588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:43:18.175207  390588 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:43:18.175230  390588 start.go:496] detecting cgroup driver to use...
	I1213 10:43:18.175264  390588 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:43:18.175320  390588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:43:18.190633  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:43:18.203672  390588 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:43:18.203747  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:43:18.219163  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:43:18.232309  390588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:43:18.357889  390588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:43:18.493929  390588 docker.go:234] disabling docker service ...
	I1213 10:43:18.494052  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:43:18.509796  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:43:18.523416  390588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:43:18.655317  390588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:43:18.778247  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:43:18.791182  390588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:43:18.805083  390588 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
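The crictl.yaml written above points crictl at the CRI-O socket. As a sketch (not part of the minikube flow), the same endpoint can be exercised directly, bypassing /etc/crictl.yaml:

    # Sketch: talk to the CRI-O socket explicitly instead of relying on /etc/crictl.yaml.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version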
	I1213 10:43:18.806588  390588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:43:18.806679  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.815701  390588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:43:18.815803  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.824913  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.834321  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.843170  390588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:43:18.851373  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.860701  390588 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.869075  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
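The sed commands above converge on a handful of settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A sketch of checking the result by hand, with expected values taken from the commands in this log (the rest of the drop-in depends on the base image):

    # Sketch: confirm the settings the edits above are meant to leave behind.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",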
	I1213 10:43:18.877860  390588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:43:18.884514  390588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:43:18.885462  390588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:43:18.893210  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.009167  390588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:43:19.185094  390588 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:43:19.185195  390588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:43:19.189492  390588 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:43:19.189518  390588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:43:19.189526  390588 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1213 10:43:19.189541  390588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:19.189566  390588 command_runner.go:130] > Access: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189581  390588 command_runner.go:130] > Modify: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189586  390588 command_runner.go:130] > Change: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189590  390588 command_runner.go:130] >  Birth: -
	I1213 10:43:19.190244  390588 start.go:564] Will wait 60s for crictl version
	I1213 10:43:19.190335  390588 ssh_runner.go:195] Run: which crictl
	I1213 10:43:19.193561  390588 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:43:19.194286  390588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:43:19.222711  390588 command_runner.go:130] > Version:  0.1.0
	I1213 10:43:19.222747  390588 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:43:19.222752  390588 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:43:19.222773  390588 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:43:19.225058  390588 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:43:19.225194  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.255970  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.256013  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.256019  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.256025  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.256044  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.256051  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.256078  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.256090  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.256094  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.256098  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.256105  390588 command_runner.go:130] >      static
	I1213 10:43:19.256109  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.256113  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.256117  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.256123  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.256128  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.256131  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.256136  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.256166  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.256195  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.258161  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.285922  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.285950  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.285964  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.285970  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.285975  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.285999  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.286010  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.286017  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.286022  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.286028  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.286046  390588 command_runner.go:130] >      static
	I1213 10:43:19.286056  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.286061  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.286075  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.286093  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.286102  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.286108  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.286132  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.286137  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.286153  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.291101  390588 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:43:19.293929  390588 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:43:19.310541  390588 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:43:19.314437  390588 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:43:19.314776  390588 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:43:19.314904  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:19.314962  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.346332  390588 command_runner.go:130] > {
	I1213 10:43:19.346357  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.346361  390588 command_runner.go:130] >     {
	I1213 10:43:19.346369  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.346374  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346380  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.346383  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346387  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346396  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.346404  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.346411  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346416  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.346423  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346429  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346436  390588 command_runner.go:130] >     },
	I1213 10:43:19.346439  390588 command_runner.go:130] >     {
	I1213 10:43:19.346445  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.346449  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346457  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.346467  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346472  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346480  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.346491  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.346494  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346508  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.346518  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346525  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346531  390588 command_runner.go:130] >     },
	I1213 10:43:19.346535  390588 command_runner.go:130] >     {
	I1213 10:43:19.346541  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.346548  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346553  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.346556  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346563  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346571  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.346582  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.346586  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346590  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.346594  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.346600  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346604  390588 command_runner.go:130] >     },
	I1213 10:43:19.346610  390588 command_runner.go:130] >     {
	I1213 10:43:19.346616  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.346621  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346628  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.346632  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346636  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346646  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.346657  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.346661  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346667  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.346671  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346675  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346679  390588 command_runner.go:130] >       },
	I1213 10:43:19.346690  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346698  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346702  390588 command_runner.go:130] >     },
	I1213 10:43:19.346705  390588 command_runner.go:130] >     {
	I1213 10:43:19.346715  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.346722  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346728  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.346731  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346736  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346745  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.346760  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.346764  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346768  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.346775  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346778  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346782  390588 command_runner.go:130] >       },
	I1213 10:43:19.346786  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346796  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346799  390588 command_runner.go:130] >     },
	I1213 10:43:19.346802  390588 command_runner.go:130] >     {
	I1213 10:43:19.346811  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.346818  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346824  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.346828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346832  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346842  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.346851  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.346859  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346863  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.346866  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346870  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346875  390588 command_runner.go:130] >       },
	I1213 10:43:19.346879  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346886  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346889  390588 command_runner.go:130] >     },
	I1213 10:43:19.346892  390588 command_runner.go:130] >     {
	I1213 10:43:19.346898  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.346911  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346917  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.346923  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346927  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346934  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.346946  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.346950  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346954  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.346958  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346964  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346967  390588 command_runner.go:130] >     },
	I1213 10:43:19.346970  390588 command_runner.go:130] >     {
	I1213 10:43:19.346977  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.346984  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346990  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.346993  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346997  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347007  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.347027  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.347034  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347038  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.347041  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347045  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.347048  390588 command_runner.go:130] >       },
	I1213 10:43:19.347053  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347058  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.347062  390588 command_runner.go:130] >     },
	I1213 10:43:19.347065  390588 command_runner.go:130] >     {
	I1213 10:43:19.347072  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.347078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.347083  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.347087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347097  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347109  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.347120  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.347124  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347132  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.347135  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347140  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.347145  390588 command_runner.go:130] >       },
	I1213 10:43:19.347149  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347155  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.347158  390588 command_runner.go:130] >     }
	I1213 10:43:19.347161  390588 command_runner.go:130] >   ]
	I1213 10:43:19.347164  390588 command_runner.go:130] > }
	I1213 10:43:19.347379  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.347391  390588 crio.go:433] Images already preloaded, skipping extraction
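The JSON above is what minikube inspects to decide that the preloaded images are already present and extraction can be skipped. The same listing can be reduced to just the image tags with crictl plus jq (jq is assumed to be available wherever this is run; it is not necessarily present inside the node image):

    # Sketch: list only the repo tags from the crictl JSON shown above.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'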
	I1213 10:43:19.347452  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.372755  390588 command_runner.go:130] > {
	I1213 10:43:19.372774  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.372779  390588 command_runner.go:130] >     {
	I1213 10:43:19.372788  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.372792  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372799  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.372803  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372807  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372816  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.372824  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.372828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372832  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.372836  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372851  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372854  390588 command_runner.go:130] >     },
	I1213 10:43:19.372857  390588 command_runner.go:130] >     {
	I1213 10:43:19.372863  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.372868  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372873  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.372876  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372880  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372889  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.372897  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.372900  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372904  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.372908  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372920  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372924  390588 command_runner.go:130] >     },
	I1213 10:43:19.372927  390588 command_runner.go:130] >     {
	I1213 10:43:19.372934  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.372938  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372943  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.372947  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372950  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372958  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.372966  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.372970  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372973  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.372978  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.372982  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372985  390588 command_runner.go:130] >     },
	I1213 10:43:19.372988  390588 command_runner.go:130] >     {
	I1213 10:43:19.372994  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.372998  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373002  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.373007  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373011  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373018  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.373025  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.373029  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373033  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.373036  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373040  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373043  390588 command_runner.go:130] >       },
	I1213 10:43:19.373052  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373056  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373059  390588 command_runner.go:130] >     },
	I1213 10:43:19.373062  390588 command_runner.go:130] >     {
	I1213 10:43:19.373070  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.373078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373083  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.373087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373090  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373098  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.373110  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.373114  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373118  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.373122  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373126  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373129  390588 command_runner.go:130] >       },
	I1213 10:43:19.373132  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373136  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373139  390588 command_runner.go:130] >     },
	I1213 10:43:19.373142  390588 command_runner.go:130] >     {
	I1213 10:43:19.373148  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.373151  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373157  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.373161  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373164  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373172  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.373181  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.373184  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373188  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.373191  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373195  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373198  390588 command_runner.go:130] >       },
	I1213 10:43:19.373202  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373206  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373208  390588 command_runner.go:130] >     },
	I1213 10:43:19.373211  390588 command_runner.go:130] >     {
	I1213 10:43:19.373218  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.373222  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373230  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.373234  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373238  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373246  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.373253  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.373256  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373260  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.373263  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373267  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373270  390588 command_runner.go:130] >     },
	I1213 10:43:19.373273  390588 command_runner.go:130] >     {
	I1213 10:43:19.373279  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.373283  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373288  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.373291  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373295  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373303  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.373321  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.373324  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373328  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.373331  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373336  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373339  390588 command_runner.go:130] >       },
	I1213 10:43:19.373343  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373346  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373349  390588 command_runner.go:130] >     },
	I1213 10:43:19.373352  390588 command_runner.go:130] >     {
	I1213 10:43:19.373359  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.373362  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373367  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.373372  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373376  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373383  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.373394  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.373398  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373402  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.373405  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373409  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.373412  390588 command_runner.go:130] >       },
	I1213 10:43:19.373419  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373422  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.373426  390588 command_runner.go:130] >     }
	I1213 10:43:19.373428  390588 command_runner.go:130] >   ]
	I1213 10:43:19.373432  390588 command_runner.go:130] > }
	I1213 10:43:19.375861  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.375885  390588 cache_images.go:86] Images are preloaded, skipping loading
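	The preload check logged above amounts to listing the images known to CRI-O via "sudo crictl images --output json" and comparing their repo tags against the set expected for the requested Kubernetes version. The following is only an illustrative sketch of that comparison, not minikube's actual cache_images implementation; the struct fields mirror the JSON shown in the log, and the "expected" list below is a hand-picked subset for demonstration.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the fields of interest from `crictl images --output json`.
	type criImage struct {
		RepoTags []string `json:"repoTags"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Illustrative expected set; the real list depends on the Kubernetes version being started.
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/pause:3.10.1",
		}

		// Same command the log shows being run over SSH on the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}

		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}

		// Index every repo tag reported by the runtime.
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}

		// If anything expected is missing, the preloaded tarball would need to be extracted.
		for _, want := range expected {
			if !have[want] {
				fmt.Println("missing, extraction needed:", want)
				return
			}
		}
		fmt.Println("all images are preloaded")
	}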
	I1213 10:43:19.375894  390588 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:43:19.375988  390588 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
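	The kubelet unit content printed above is what minikube writes to the node as a systemd drop-in for this profile. If one wanted to confirm what the node is actually running during a failure like the ones in this report, the unit and the live process can be inspected from the host, e.g. with "minikube ssh -p functional-407525 -- sudo systemctl cat kubelet" and "minikube ssh -p functional-407525 -- pgrep -a kubelet" (profile name taken from this run; commands shown only as a manual verification aid, they are not part of the test flow).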
	I1213 10:43:19.376071  390588 ssh_runner.go:195] Run: crio config
	I1213 10:43:19.425743  390588 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:43:19.425768  390588 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:43:19.425775  390588 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:43:19.425779  390588 command_runner.go:130] > #
	I1213 10:43:19.425787  390588 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:43:19.425793  390588 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:43:19.425801  390588 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:43:19.425810  390588 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:43:19.425814  390588 command_runner.go:130] > # reload'.
	I1213 10:43:19.425821  390588 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:43:19.425828  390588 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:43:19.425838  390588 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:43:19.425844  390588 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:43:19.425847  390588 command_runner.go:130] > [crio]
	I1213 10:43:19.425854  390588 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:43:19.425862  390588 command_runner.go:130] > # containers images, in this directory.
	I1213 10:43:19.426591  390588 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:43:19.426608  390588 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:43:19.427294  390588 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:43:19.427313  390588 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:43:19.427819  390588 command_runner.go:130] > # imagestore = ""
	I1213 10:43:19.427842  390588 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:43:19.427850  390588 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:43:19.428482  390588 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:43:19.428503  390588 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:43:19.428511  390588 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:43:19.428824  390588 command_runner.go:130] > # storage_option = [
	I1213 10:43:19.429159  390588 command_runner.go:130] > # ]
	I1213 10:43:19.429181  390588 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:43:19.429189  390588 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:43:19.429811  390588 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:43:19.429832  390588 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:43:19.429847  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:43:19.429857  390588 command_runner.go:130] > # always happen on a node reboot
	I1213 10:43:19.430483  390588 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:43:19.430528  390588 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:43:19.430541  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:43:19.430547  390588 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:43:19.431051  390588 command_runner.go:130] > # version_file_persist = ""
	I1213 10:43:19.431076  390588 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:43:19.431086  390588 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:43:19.431716  390588 command_runner.go:130] > # internal_wipe = true
	I1213 10:43:19.431739  390588 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:43:19.431747  390588 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:43:19.432440  390588 command_runner.go:130] > # internal_repair = true
	I1213 10:43:19.432456  390588 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:43:19.432463  390588 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:43:19.432469  390588 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:43:19.432478  390588 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:43:19.432487  390588 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:43:19.432491  390588 command_runner.go:130] > [crio.api]
	I1213 10:43:19.432496  390588 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:43:19.432503  390588 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:43:19.432512  390588 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:43:19.432517  390588 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:43:19.432544  390588 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:43:19.432552  390588 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:43:19.432851  390588 command_runner.go:130] > # stream_port = "0"
	I1213 10:43:19.432867  390588 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:43:19.432873  390588 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:43:19.432879  390588 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:43:19.432886  390588 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:43:19.432897  390588 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:43:19.432906  390588 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433090  390588 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:43:19.433111  390588 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:43:19.433117  390588 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433335  390588 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:43:19.433354  390588 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:43:19.433362  390588 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:43:19.433373  390588 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:43:19.433389  390588 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:43:19.433408  390588 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433419  390588 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:43:19.433428  390588 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433678  390588 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:43:19.433694  390588 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:43:19.433701  390588 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:43:19.433705  390588 command_runner.go:130] > [crio.runtime]
	I1213 10:43:19.433711  390588 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:43:19.433719  390588 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:43:19.433726  390588 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:43:19.433733  390588 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:43:19.433737  390588 command_runner.go:130] > # default_ulimits = [
	I1213 10:43:19.433744  390588 command_runner.go:130] > # ]
	I1213 10:43:19.433751  390588 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:43:19.433758  390588 command_runner.go:130] > # no_pivot = false
	I1213 10:43:19.433764  390588 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:43:19.433771  390588 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:43:19.433778  390588 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:43:19.433785  390588 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:43:19.433790  390588 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:43:19.433797  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.433949  390588 command_runner.go:130] > # conmon = ""
	I1213 10:43:19.433968  390588 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:43:19.433978  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:43:19.434402  390588 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:43:19.434425  390588 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:43:19.434435  390588 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:43:19.434446  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.434453  390588 command_runner.go:130] > # conmon_env = [
	I1213 10:43:19.434466  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434472  390588 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:43:19.434478  390588 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:43:19.434484  390588 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:43:19.434488  390588 command_runner.go:130] > # default_env = [
	I1213 10:43:19.434491  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434497  390588 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:43:19.434515  390588 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:43:19.434525  390588 command_runner.go:130] > # selinux = false
	I1213 10:43:19.434535  390588 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:43:19.434543  390588 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:43:19.434555  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434559  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.434565  390588 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:43:19.434570  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434841  390588 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:43:19.434858  390588 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:43:19.434865  390588 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:43:19.434872  390588 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:43:19.434885  390588 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:43:19.434891  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434896  390588 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:43:19.434902  390588 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:43:19.434908  390588 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:43:19.434913  390588 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:43:19.434937  390588 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:43:19.434946  390588 command_runner.go:130] > # blockio parameters.
	I1213 10:43:19.434950  390588 command_runner.go:130] > # blockio_reload = false
	I1213 10:43:19.434957  390588 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:43:19.434961  390588 command_runner.go:130] > # irqbalance daemon.
	I1213 10:43:19.434966  390588 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:43:19.434972  390588 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:43:19.434982  390588 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:43:19.434992  390588 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:43:19.435365  390588 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:43:19.435381  390588 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:43:19.435387  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.435392  390588 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:43:19.435398  390588 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:43:19.435404  390588 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:43:19.435411  390588 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:43:19.435584  390588 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:43:19.435601  390588 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:43:19.435608  390588 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:43:19.435617  390588 command_runner.go:130] > # will be added.
	I1213 10:43:19.436649  390588 command_runner.go:130] > # default_capabilities = [
	I1213 10:43:19.436661  390588 command_runner.go:130] > # 	"CHOWN",
	I1213 10:43:19.436665  390588 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:43:19.436669  390588 command_runner.go:130] > # 	"FSETID",
	I1213 10:43:19.436673  390588 command_runner.go:130] > # 	"FOWNER",
	I1213 10:43:19.436679  390588 command_runner.go:130] > # 	"SETGID",
	I1213 10:43:19.436683  390588 command_runner.go:130] > # 	"SETUID",
	I1213 10:43:19.436708  390588 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:43:19.436718  390588 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:43:19.436722  390588 command_runner.go:130] > # 	"KILL",
	I1213 10:43:19.436725  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436737  390588 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:43:19.436744  390588 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:43:19.436749  390588 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:43:19.436759  390588 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:43:19.436773  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436777  390588 command_runner.go:130] > default_sysctls = [
	I1213 10:43:19.436788  390588 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:43:19.436794  390588 command_runner.go:130] > ]
	I1213 10:43:19.436799  390588 command_runner.go:130] > # List of devices on the host that a
	I1213 10:43:19.436806  390588 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:43:19.436813  390588 command_runner.go:130] > # allowed_devices = [
	I1213 10:43:19.436817  390588 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:43:19.436820  390588 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:43:19.436823  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436828  390588 command_runner.go:130] > # List of additional devices. specified as
	I1213 10:43:19.436836  390588 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:43:19.436842  390588 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:43:19.436850  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436857  390588 command_runner.go:130] > # additional_devices = [
	I1213 10:43:19.436861  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436868  390588 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:43:19.436872  390588 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:43:19.436878  390588 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:43:19.436882  390588 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:43:19.436888  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436895  390588 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:43:19.436904  390588 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:43:19.436908  390588 command_runner.go:130] > # Defaults to false.
	I1213 10:43:19.436913  390588 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:43:19.436919  390588 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:43:19.436926  390588 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:43:19.436930  390588 command_runner.go:130] > # hooks_dir = [
	I1213 10:43:19.436936  390588 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:43:19.436942  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436948  390588 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:43:19.436964  390588 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:43:19.436969  390588 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:43:19.436973  390588 command_runner.go:130] > #
	I1213 10:43:19.436981  390588 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:43:19.436992  390588 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:43:19.437001  390588 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:43:19.437008  390588 command_runner.go:130] > #
	I1213 10:43:19.437022  390588 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:43:19.437029  390588 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:43:19.437035  390588 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:43:19.437044  390588 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:43:19.437047  390588 command_runner.go:130] > #
	I1213 10:43:19.437051  390588 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:43:19.437059  390588 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:43:19.437068  390588 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:43:19.437072  390588 command_runner.go:130] > # pids_limit = -1
	I1213 10:43:19.437078  390588 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 10:43:19.437087  390588 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:43:19.437094  390588 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:43:19.437104  390588 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:43:19.437110  390588 command_runner.go:130] > # log_size_max = -1
	I1213 10:43:19.437117  390588 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:43:19.437124  390588 command_runner.go:130] > # log_to_journald = false
	I1213 10:43:19.437130  390588 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:43:19.437136  390588 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:43:19.437143  390588 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:43:19.437149  390588 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:43:19.437160  390588 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:43:19.437164  390588 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:43:19.437170  390588 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:43:19.437174  390588 command_runner.go:130] > # read_only = false
	I1213 10:43:19.437180  390588 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:43:19.437188  390588 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:43:19.437195  390588 command_runner.go:130] > # live configuration reload.
	I1213 10:43:19.437199  390588 command_runner.go:130] > # log_level = "info"
	I1213 10:43:19.437216  390588 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:43:19.437221  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.437232  390588 command_runner.go:130] > # log_filter = ""
	I1213 10:43:19.437241  390588 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437248  390588 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:43:19.437252  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437260  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437264  390588 command_runner.go:130] > # uid_mappings = ""
	I1213 10:43:19.437270  390588 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437280  390588 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:43:19.437285  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437295  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437301  390588 command_runner.go:130] > # gid_mappings = ""
	I1213 10:43:19.437308  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:43:19.437314  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437320  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437331  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437335  390588 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:43:19.437345  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:43:19.437354  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437361  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437371  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437375  390588 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:43:19.437382  390588 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:43:19.437390  390588 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:43:19.437396  390588 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:43:19.437403  390588 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:43:19.437409  390588 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:43:19.437416  390588 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:43:19.437423  390588 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:43:19.437428  390588 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:43:19.437432  390588 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:43:19.437441  390588 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:43:19.437449  390588 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:43:19.437457  390588 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:43:19.437473  390588 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:43:19.437482  390588 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:43:19.437491  390588 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:43:19.437497  390588 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:43:19.437502  390588 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:43:19.437506  390588 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:43:19.437511  390588 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:43:19.437519  390588 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:43:19.437524  390588 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:43:19.437534  390588 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:43:19.437546  390588 command_runner.go:130] > # pinns_path = ""
	I1213 10:43:19.437553  390588 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:43:19.437560  390588 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:43:19.437567  390588 command_runner.go:130] > # enable_criu_support = true
	I1213 10:43:19.437573  390588 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:43:19.437579  390588 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:43:19.437586  390588 command_runner.go:130] > # enable_pod_events = false
	I1213 10:43:19.437593  390588 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:43:19.437598  390588 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:43:19.437604  390588 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:43:19.437609  390588 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:43:19.437619  390588 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 10:43:19.437636  390588 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:43:19.437642  390588 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:43:19.437653  390588 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:43:19.437664  390588 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:43:19.437668  390588 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:43:19.437672  390588 command_runner.go:130] > # ]
	I1213 10:43:19.437678  390588 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:43:19.437685  390588 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:43:19.437693  390588 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:43:19.437708  390588 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:43:19.437715  390588 command_runner.go:130] > #
	I1213 10:43:19.437724  390588 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:43:19.437729  390588 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:43:19.437737  390588 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:43:19.437742  390588 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:43:19.437752  390588 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:43:19.437760  390588 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:43:19.437764  390588 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:43:19.437769  390588 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:43:19.437775  390588 command_runner.go:130] > # monitor_env = []
	I1213 10:43:19.437780  390588 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:43:19.437787  390588 command_runner.go:130] > # allowed_annotations = []
	I1213 10:43:19.437793  390588 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:43:19.437799  390588 command_runner.go:130] > # no_sync_log = false
	I1213 10:43:19.437803  390588 command_runner.go:130] > # default_annotations = {}
	I1213 10:43:19.437807  390588 command_runner.go:130] > # stream_websockets = false
	I1213 10:43:19.437810  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.437838  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.437847  390588 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:43:19.437854  390588 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:43:19.437860  390588 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:43:19.437868  390588 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:43:19.437874  390588 command_runner.go:130] > #   in $PATH.
	I1213 10:43:19.437880  390588 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:43:19.437888  390588 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:43:19.437895  390588 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:43:19.437898  390588 command_runner.go:130] > #   state.
	I1213 10:43:19.437905  390588 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:43:19.437913  390588 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 10:43:19.437920  390588 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:43:19.437926  390588 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:43:19.437932  390588 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:43:19.437938  390588 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:43:19.437949  390588 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:43:19.437959  390588 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:43:19.437971  390588 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:43:19.437976  390588 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:43:19.437983  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:43:19.437993  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:43:19.438000  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:43:19.438006  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:43:19.438017  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:43:19.438026  390588 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:43:19.438042  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:43:19.438048  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:43:19.438055  390588 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:43:19.438064  390588 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:43:19.438071  390588 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:43:19.438079  390588 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:43:19.438091  390588 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:43:19.438097  390588 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:43:19.438104  390588 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:43:19.438114  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:43:19.438123  390588 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:43:19.438128  390588 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:43:19.438135  390588 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:43:19.438145  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:43:19.438153  390588 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:43:19.438160  390588 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:43:19.438168  390588 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:43:19.438173  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:43:19.438182  390588 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 10:43:19.438186  390588 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:43:19.438194  390588 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:43:19.438204  390588 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:43:19.438215  390588 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:43:19.438228  390588 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:43:19.438236  390588 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:43:19.438246  390588 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:43:19.438254  390588 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:43:19.438263  390588 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:43:19.438271  390588 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:43:19.438280  390588 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:43:19.438293  390588 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:43:19.438300  390588 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:43:19.438308  390588 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:43:19.438322  390588 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:43:19.438327  390588 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:43:19.438335  390588 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:43:19.438343  390588 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:43:19.438346  390588 command_runner.go:130] > #
	I1213 10:43:19.438350  390588 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:43:19.438353  390588 command_runner.go:130] > #
	I1213 10:43:19.438359  390588 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:43:19.438370  390588 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:43:19.438376  390588 command_runner.go:130] > #
	I1213 10:43:19.438383  390588 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:43:19.438392  390588 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:43:19.438395  390588 command_runner.go:130] > #
	I1213 10:43:19.438401  390588 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:43:19.438406  390588 command_runner.go:130] > # feature.
	I1213 10:43:19.438410  390588 command_runner.go:130] > #
	I1213 10:43:19.438416  390588 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:43:19.438422  390588 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:43:19.438431  390588 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:43:19.438437  390588 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:43:19.438447  390588 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:43:19.438450  390588 command_runner.go:130] > #
	I1213 10:43:19.438456  390588 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:43:19.438465  390588 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:43:19.438471  390588 command_runner.go:130] > #
	I1213 10:43:19.438478  390588 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:43:19.438486  390588 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:43:19.438491  390588 command_runner.go:130] > #
	I1213 10:43:19.438497  390588 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:43:19.438512  390588 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:43:19.438516  390588 command_runner.go:130] > # limitation.
	I1213 10:43:19.438523  390588 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:43:19.438528  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:43:19.438533  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438539  390588 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:43:19.438543  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438549  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438553  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438560  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438564  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438577  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438581  390588 command_runner.go:130] > allowed_annotations = [
	I1213 10:43:19.438586  390588 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:43:19.438589  390588 command_runner.go:130] > ]
	I1213 10:43:19.438594  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438599  390588 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:43:19.438604  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:43:19.438610  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438614  390588 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:43:19.438617  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438621  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438625  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438633  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438639  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438644  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438649  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438664  390588 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:43:19.438673  390588 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:43:19.438684  390588 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:43:19.438692  390588 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1213 10:43:19.438702  390588 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:43:19.438712  390588 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:43:19.438728  390588 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:43:19.438734  390588 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:43:19.438743  390588 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:43:19.438755  390588 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:43:19.438761  390588 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:43:19.438772  390588 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:43:19.438778  390588 command_runner.go:130] > # Example:
	I1213 10:43:19.438782  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:43:19.438787  390588 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:43:19.438793  390588 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:43:19.438801  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:43:19.438806  390588 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:43:19.438810  390588 command_runner.go:130] > # cpushares = "5"
	I1213 10:43:19.438814  390588 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:43:19.438820  390588 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:43:19.438825  390588 command_runner.go:130] > # cpulimit = "35"
	I1213 10:43:19.438837  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.438841  390588 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:43:19.438852  390588 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:43:19.438861  390588 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:43:19.438866  390588 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:43:19.438875  390588 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:43:19.438880  390588 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
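	Following the example above, a pod opting into this workload could carry annotations along these lines (container name "app" is a placeholder, and the value format mirrors the example line above rather than anything observed in this run):

		metadata:
		  annotations:
		    io.crio/workload: ""                                # activation annotation; value is ignored
		    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override for container "app"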
	I1213 10:43:19.438885  390588 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:43:19.438894  390588 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:43:19.438905  390588 command_runner.go:130] > # Default value is set to true
	I1213 10:43:19.438910  390588 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:43:19.438915  390588 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:43:19.438925  390588 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:43:19.438932  390588 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:43:19.438938  390588 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:43:19.438943  390588 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1213 10:43:19.438951  390588 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:43:19.438954  390588 command_runner.go:130] > # timezone = ""
	I1213 10:43:19.438961  390588 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:43:19.438967  390588 command_runner.go:130] > #
	I1213 10:43:19.438973  390588 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:43:19.438979  390588 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:43:19.438983  390588 command_runner.go:130] > [crio.image]
	I1213 10:43:19.438993  390588 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:43:19.438999  390588 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:43:19.439005  390588 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:43:19.439015  390588 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439019  390588 command_runner.go:130] > # global_auth_file = ""
	I1213 10:43:19.439024  390588 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:43:19.439029  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439034  390588 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.439040  390588 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:43:19.439048  390588 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439055  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439060  390588 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:43:19.439066  390588 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:43:19.439072  390588 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1213 10:43:19.439081  390588 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1213 10:43:19.439087  390588 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:43:19.439094  390588 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:43:19.439100  390588 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:43:19.439106  390588 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:43:19.439111  390588 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:43:19.439117  390588 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:43:19.439123  390588 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:43:19.439134  390588 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:43:19.439142  390588 command_runner.go:130] > # pinned_images = [
	I1213 10:43:19.439145  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439151  390588 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:43:19.439157  390588 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:43:19.439166  390588 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:43:19.439172  390588 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:43:19.439180  390588 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:43:19.439184  390588 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:43:19.439190  390588 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:43:19.439197  390588 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:43:19.439203  390588 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:43:19.439209  390588 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1213 10:43:19.439223  390588 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:43:19.439228  390588 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
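	For reference, a permissive containers-policy.json(5) of the kind referred to above is only a few lines; this is a generic sketch, not the policy used by this run:

		{
		  "default": [
		    { "type": "insecureAcceptAnything" }
		  ]
		}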
	I1213 10:43:19.439234  390588 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:43:19.439243  390588 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:43:19.439247  390588 command_runner.go:130] > # changing them here.
	I1213 10:43:19.439253  390588 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:43:19.439260  390588 command_runner.go:130] > # insecure_registries = [
	I1213 10:43:19.439263  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439268  390588 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:43:19.439273  390588 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:43:19.439723  390588 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:43:19.439741  390588 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:43:19.439879  390588 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:43:19.439918  390588 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:43:19.439927  390588 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:43:19.439931  390588 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:43:19.439937  390588 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:43:19.439946  390588 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1213 10:43:19.439958  390588 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:43:19.439963  390588 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:43:19.439974  390588 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:43:19.439985  390588 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:43:19.439993  390588 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1213 10:43:19.440002  390588 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:43:19.440006  390588 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:43:19.440012  390588 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:43:19.440018  390588 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:43:19.440023  390588 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 10:43:19.440029  390588 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:43:19.440034  390588 command_runner.go:130] > # CNI plugins.
	I1213 10:43:19.440037  390588 command_runner.go:130] > [crio.network]
	I1213 10:43:19.440044  390588 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:43:19.440053  390588 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1213 10:43:19.440058  390588 command_runner.go:130] > # cni_default_network = ""
	I1213 10:43:19.440064  390588 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:43:19.440073  390588 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:43:19.440080  390588 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:43:19.440084  390588 command_runner.go:130] > # plugin_dirs = [
	I1213 10:43:19.440211  390588 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:43:19.440357  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440384  390588 command_runner.go:130] > # List of included pod metrics.
	I1213 10:43:19.440392  390588 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:43:19.440401  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440408  390588 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:43:19.440418  390588 command_runner.go:130] > [crio.metrics]
	I1213 10:43:19.440423  390588 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:43:19.440436  390588 command_runner.go:130] > # enable_metrics = false
	I1213 10:43:19.440441  390588 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:43:19.440446  390588 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:43:19.440452  390588 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:43:19.440460  390588 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:43:19.440472  390588 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:43:19.440477  390588 command_runner.go:130] > # metrics_collectors = [
	I1213 10:43:19.440481  390588 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:43:19.440496  390588 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:43:19.440501  390588 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:43:19.440506  390588 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:43:19.440509  390588 command_runner.go:130] > # 	"operations_total",
	I1213 10:43:19.440637  390588 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:43:19.440664  390588 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:43:19.440670  390588 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:43:19.440688  390588 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:43:19.440696  390588 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:43:19.440701  390588 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:43:19.440705  390588 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:43:19.440716  390588 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:43:19.440720  390588 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:43:19.440726  390588 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:43:19.440734  390588 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:43:19.440739  390588 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:43:19.440742  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440749  390588 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:43:19.440758  390588 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:43:19.440764  390588 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:43:19.440768  390588 command_runner.go:130] > # metrics_port = 9090
	I1213 10:43:19.440773  390588 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:43:19.440901  390588 command_runner.go:130] > # metrics_socket = ""
	I1213 10:43:19.440915  390588 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:43:19.440937  390588 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:43:19.440950  390588 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:43:19.440955  390588 command_runner.go:130] > # certificate on any modification event.
	I1213 10:43:19.440959  390588 command_runner.go:130] > # metrics_cert = ""
	I1213 10:43:19.440964  390588 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:43:19.440969  390588 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:43:19.440972  390588 command_runner.go:130] > # metrics_key = ""
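	Assuming enable_metrics were turned on with the defaults above, the endpoint could be checked with a plain HTTP scrape (hypothetical command, not part of this run; metric names depend on the enabled collectors):

		curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'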
	I1213 10:43:19.440978  390588 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:43:19.440982  390588 command_runner.go:130] > [crio.tracing]
	I1213 10:43:19.440995  390588 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:43:19.441000  390588 command_runner.go:130] > # enable_tracing = false
	I1213 10:43:19.441006  390588 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 10:43:19.441015  390588 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:43:19.441022  390588 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:43:19.441031  390588 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:43:19.441039  390588 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:43:19.441042  390588 command_runner.go:130] > [crio.nri]
	I1213 10:43:19.441047  390588 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:43:19.441253  390588 command_runner.go:130] > # enable_nri = true
	I1213 10:43:19.441268  390588 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:43:19.441274  390588 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:43:19.441278  390588 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:43:19.441283  390588 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:43:19.441288  390588 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:43:19.441293  390588 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:43:19.441298  390588 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:43:19.441355  390588 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:43:19.441365  390588 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:43:19.441370  390588 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:43:19.441374  390588 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:43:19.441379  390588 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:43:19.441384  390588 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:43:19.441391  390588 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:43:19.441401  390588 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:43:19.441405  390588 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:43:19.441417  390588 command_runner.go:130] > # - OCI hook injection
	I1213 10:43:19.441427  390588 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:43:19.441435  390588 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:43:19.441440  390588 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:43:19.441444  390588 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:43:19.441453  390588 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:43:19.441460  390588 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:43:19.441466  390588 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:43:19.441469  390588 command_runner.go:130] > #
	I1213 10:43:19.441473  390588 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:43:19.441480  390588 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:43:19.441485  390588 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:43:19.441629  390588 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:43:19.441658  390588 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:43:19.441671  390588 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:43:19.441677  390588 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:43:19.441685  390588 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:43:19.441688  390588 command_runner.go:130] > # ]
	I1213 10:43:19.441694  390588 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 10:43:19.441700  390588 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:43:19.441709  390588 command_runner.go:130] > [crio.stats]
	I1213 10:43:19.441720  390588 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:43:19.441730  390588 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:43:19.441734  390588 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:43:19.441743  390588 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:43:19.441752  390588 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:43:19.441756  390588 command_runner.go:130] > # collection_period = 0
	I1213 10:43:19.443275  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.403988128Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:43:19.443305  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404025092Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:43:19.443315  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404051931Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:43:19.443326  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404076596Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:43:19.443340  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404148548Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:19.443352  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404414955Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:43:19.443364  390588 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 10:43:19.443836  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:19.443854  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:19.443875  390588 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:43:19.443898  390588 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:43:19.444025  390588 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:43:19.444095  390588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:43:19.450891  390588 command_runner.go:130] > kubeadm
	I1213 10:43:19.450967  390588 command_runner.go:130] > kubectl
	I1213 10:43:19.450987  390588 command_runner.go:130] > kubelet
	I1213 10:43:19.451803  390588 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:43:19.451864  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:43:19.459352  390588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:43:19.471938  390588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:43:19.485136  390588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
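	One way to sanity-check the generated kubeadm.yaml.new on the node would be kubeadm's own config validator; the invocation below is illustrative and not part of minikube's flow:

		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new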
	I1213 10:43:19.498010  390588 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:43:19.501925  390588 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:43:19.502045  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.620049  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:20.022042  390588 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:43:20.022188  390588 certs.go:195] generating shared ca certs ...
	I1213 10:43:20.022221  390588 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.022446  390588 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:43:20.022567  390588 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:43:20.022606  390588 certs.go:257] generating profile certs ...
	I1213 10:43:20.022771  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:43:20.022893  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:43:20.023000  390588 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:43:20.023048  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:43:20.023081  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:43:20.023123  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:43:20.023158  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:43:20.023202  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:43:20.023238  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:43:20.023279  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:43:20.023318  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:43:20.023431  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:43:20.023496  390588 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:43:20.023540  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:43:20.023607  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:43:20.023670  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:43:20.023728  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:43:20.023828  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:20.023897  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.023941  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem -> /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.023985  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.024591  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:43:20.049939  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:43:20.071962  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:43:20.093520  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:43:20.117621  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:43:20.135349  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:43:20.152883  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:43:20.170121  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:43:20.188254  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:43:20.205892  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:43:20.223561  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:43:20.241467  390588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:43:20.254691  390588 ssh_runner.go:195] Run: openssl version
	I1213 10:43:20.260777  390588 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:43:20.261193  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.268769  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:43:20.276440  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280293  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280332  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280379  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.320848  390588 command_runner.go:130] > 3ec20f2e
	I1213 10:43:20.321296  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:43:20.328708  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.335901  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:43:20.343392  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347019  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347264  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347323  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.388019  390588 command_runner.go:130] > b5213941
	I1213 10:43:20.388604  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:43:20.396066  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.403389  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:43:20.410914  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414772  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414823  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414888  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.455731  390588 command_runner.go:130] > 51391683
	I1213 10:43:20.456248  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
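	The hash-and-symlink steps above follow the usual OpenSSL c_rehash convention: the link name under /etc/ssl/certs is the certificate's subject hash with a ".0" suffix. A condensed sketch of the same sequence:

		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
		sudo test -L "/etc/ssl/certs/${HASH}.0" && echo linked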
	I1213 10:43:20.463583  390588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467136  390588 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467160  390588 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:43:20.467167  390588 command_runner.go:130] > Device: 259,1	Inode: 1322536     Links: 1
	I1213 10:43:20.467174  390588 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:20.467180  390588 command_runner.go:130] > Access: 2025-12-13 10:39:12.482590700 +0000
	I1213 10:43:20.467186  390588 command_runner.go:130] > Modify: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467191  390588 command_runner.go:130] > Change: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467197  390588 command_runner.go:130] >  Birth: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467264  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:43:20.507794  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.508276  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:43:20.549373  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.549450  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:43:20.591501  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.592041  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:43:20.633163  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.633239  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:43:20.673681  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.674235  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:43:20.714863  390588 command_runner.go:130] > Certificate will not expire
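	The repeated checks above use openssl's -checkend flag, which exits 0 (and prints "Certificate will not expire") if the certificate is still valid that many seconds into the future; 86400 is one day. For example:

		openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
		  && echo "valid for at least 24h" || echo "expires within 24h"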
	I1213 10:43:20.715372  390588 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:20.715472  390588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:43:20.715572  390588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:43:20.742591  390588 cri.go:89] found id: ""
	I1213 10:43:20.742663  390588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:43:20.749676  390588 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:43:20.749696  390588 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:43:20.749703  390588 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:43:20.750605  390588 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:43:20.750650  390588 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:43:20.750723  390588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:43:20.758246  390588 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:43:20.758662  390588 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-407525" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.758765  390588 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "functional-407525" cluster setting kubeconfig missing "functional-407525" context setting]
	I1213 10:43:20.759076  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.759474  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.759724  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.760259  390588 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:43:20.760282  390588 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:43:20.760289  390588 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:43:20.760294  390588 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:43:20.760299  390588 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:43:20.760595  390588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:43:20.760675  390588 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:43:20.768313  390588 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:43:20.768394  390588 kubeadm.go:602] duration metric: took 17.723293ms to restartPrimaryControlPlane
	I1213 10:43:20.768419  390588 kubeadm.go:403] duration metric: took 53.05457ms to StartCluster
	I1213 10:43:20.768469  390588 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.768581  390588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.769195  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.769470  390588 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:43:20.769730  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:20.769792  390588 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:43:20.769868  390588 addons.go:70] Setting storage-provisioner=true in profile "functional-407525"
	I1213 10:43:20.769887  390588 addons.go:239] Setting addon storage-provisioner=true in "functional-407525"
	I1213 10:43:20.769967  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.770424  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.770582  390588 addons.go:70] Setting default-storageclass=true in profile "functional-407525"
	I1213 10:43:20.770602  390588 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-407525"
	I1213 10:43:20.770845  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.776047  390588 out.go:179] * Verifying Kubernetes components...
	I1213 10:43:20.778873  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:20.803376  390588 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:43:20.806823  390588 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:20.806848  390588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:43:20.806911  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.815503  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.815748  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.816048  390588 addons.go:239] Setting addon default-storageclass=true in "functional-407525"
	I1213 10:43:20.816085  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.816499  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.849236  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.860497  390588 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:20.860524  390588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:43:20.860587  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.893135  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.991835  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:21.017033  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:21.050080  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:21.773497  390588 node_ready.go:35] waiting up to 6m0s for node "functional-407525" to be "Ready" ...
	I1213 10:43:21.773656  390588 type.go:168] "Request Body" body=""
	I1213 10:43:21.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:21.774009  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774035  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774063  390588 retry.go:31] will retry after 178.71376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774107  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774121  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774127  390588 retry.go:31] will retry after 267.498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
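(Editor's note, not part of the log: each addons.go:477 / retry.go:31 pair above is one failed "kubectl apply" followed by a re-run scheduled after a growing delay, because the API server on localhost:8441 is still refusing connections. A minimal sketch of that retry-with-backoff loop, using a hypothetical applyManifest helper rather than minikube's real retry package.)

// Illustrative only: re-run "kubectl apply" with exponential backoff until it
// succeeds, mirroring the retry.go "will retry after ..." messages in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifest is a hypothetical helper standing in for minikube's apply step.
func applyManifest(path string) error {
	out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"kubectl", "apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
		if err == nil {
			return
		}
		fmt.Printf("apply failed (attempt %d), will retry after %v: %v\n", attempt, delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the delay, like the increasing retry intervals above
	}
}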
	I1213 10:43:21.774194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:21.953713  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.014320  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.018022  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.018057  390588 retry.go:31] will retry after 328.520116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.042240  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.097866  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.101425  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.101460  390588 retry.go:31] will retry after 340.23882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.273721  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.274173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.347588  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.405090  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.408724  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.408759  390588 retry.go:31] will retry after 330.053163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.441890  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.497250  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.500831  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.500864  390588 retry.go:31] will retry after 301.657591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.739051  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.774467  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.774545  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.774882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.796776  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.800408  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.800485  390588 retry.go:31] will retry after 1.110001612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.803607  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.863746  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.863797  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.863816  390588 retry.go:31] will retry after 925.323482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.274339  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.274464  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.274793  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:23.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.774742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.775115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:23.775193  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:23.789322  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:23.850165  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.853613  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.853701  390588 retry.go:31] will retry after 1.468677433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.910870  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:23.967004  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.970690  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.970723  390588 retry.go:31] will retry after 1.30336677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:24.274187  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.274270  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.274613  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:24.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.774104  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.273868  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.273973  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.274299  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:25.274422  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.322752  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:25.335088  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.335126  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.335146  390588 retry.go:31] will retry after 1.31175111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389173  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.389228  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389247  390588 retry.go:31] will retry after 1.937290048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.773818  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.773896  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.774238  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:26.274714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.274790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:26.275175  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
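(Editor's note, not part of the log: the GET /api/v1/nodes/functional-407525 requests repeating roughly every half second come from node_ready.go, which polls the node's Ready condition for up to 6 minutes and tolerates the connection-refused errors seen above. A hedged client-go sketch of that polling loop; the interval, timeout, node name and kubeconfig path are copied from the log, everything else is illustrative.)

// Illustrative only: poll a node's Ready condition until it is True or the
// timeout expires, similar to what node_ready.go does in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22127-354468/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // "waiting up to 6m0s" in the log
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(context.Background(), cs, "functional-407525"); ok {
			fmt.Println("node is Ready")
			return
		} else if err != nil {
			fmt.Println("will retry:", err)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s request cadence above
	}
	fmt.Println("timed out waiting for node to become Ready")
}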
	I1213 10:43:26.647823  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:26.708762  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:26.708815  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.708835  390588 retry.go:31] will retry after 2.338895321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.773966  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.774052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.774373  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.273820  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.327657  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:27.389087  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:27.389124  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.389154  390588 retry.go:31] will retry after 3.77996712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.774347  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.774610  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.274639  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.275025  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:28.774230  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:29.048671  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:29.108913  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:29.108956  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.108976  390588 retry.go:31] will retry after 6.196055786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.274133  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.274210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.274535  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:29.774410  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.774493  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.774856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.274678  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.274752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.275098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.774546  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.774615  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.774881  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:30.774922  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:31.169380  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:31.223779  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:31.227282  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.227315  390588 retry.go:31] will retry after 4.701439473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.274644  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.274723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.275035  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:31.773748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.274119  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.774160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:33.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.273823  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.274181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:33.274234  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:33.773733  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.273904  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.274296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.773742  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.774139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.273828  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.273922  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.305578  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:35.371590  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.371636  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.371657  390588 retry.go:31] will retry after 5.458500829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.773846  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:35.774236  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:35.929536  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:35.989448  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.989487  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.989506  390588 retry.go:31] will retry after 5.007301518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:36.274095  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.274168  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.274462  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:36.774043  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.774126  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.774417  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.273882  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.773915  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:37.774386  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:38.274036  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.274110  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.274365  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:38.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.273872  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.273948  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.774053  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.273899  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:40.274309  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:40.774007  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.774083  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.774431  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.830857  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:40.888820  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:40.888869  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.888889  390588 retry.go:31] will retry after 11.437774943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.997102  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:41.058447  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:41.058511  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.058532  390588 retry.go:31] will retry after 7.34875984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.275648  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.275736  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.275995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:41.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:42.273927  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.274020  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.274372  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:42.274432  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:42.773693  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.774092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.773920  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.774021  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:44.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.274666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.274925  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:44.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:44.773692  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.273902  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.274305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.773737  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.273797  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.273879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.274217  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.774024  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.774120  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.774453  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:46.774515  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:47.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.274050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:47.773764  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.773857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.273933  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.274397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.407754  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:48.470395  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:48.474021  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.474053  390588 retry.go:31] will retry after 19.108505533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.774398  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.774473  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.774751  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:48.774803  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:49.274554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.274627  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.274988  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:49.773726  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.273886  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.273967  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.774213  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.774666  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:51.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.274611  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.274924  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:51.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:51.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.774715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.774977  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.274174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.327551  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:52.388989  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:52.389038  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.389058  390588 retry.go:31] will retry after 15.332526016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.774747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.775066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.273766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.274095  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.773894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:53.774258  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:54.273942  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.274024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:54.774619  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.774730  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.774809  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.775152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:55.775209  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:56.273860  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.273937  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:56.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.274186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.774399  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.774745  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:58.274628  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.274703  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.275023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:58.275075  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
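[editor note] The round_trippers lines are a verbose HTTP trace: the request's verb, URL, and headers, then the response status and latency, both empty or zero here because the dial never completes. The same kind of trace can be produced with a custom http.RoundTripper; the sketch below is a generic example, not the client-go round_trippers package itself.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingTripper wraps another RoundTripper and prints one line per
// request and one per response, similar to the trace in this log.
type loggingTripper struct{ next http.RoundTripper }

func (l loggingTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL.String())
	resp, err := l.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// Matches the status="" milliseconds=0 lines above: the dial
		// failed, so there is no HTTP status to report.
		fmt.Printf("Response status=%q milliseconds=%d (%v)\n", "", ms, err)
		return nil, err
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTripper{next: http.DefaultTransport}}
	_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-407525")
}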
	I1213 10:43:58.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.274411  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.274483  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.274749  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.774628  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.774978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.774714  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.775059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:00.775121  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:01.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.274061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:01.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.773778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.774062  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.273872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.774185  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:03.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.273804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.274108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:03.274159  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:03.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.774368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.273910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.773901  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.773977  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:05.274252  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:05.773910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.774005  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.774314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.274302  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.274372  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.274644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.774485  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.774567  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.774982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.583825  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:07.646535  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.646580  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.646600  390588 retry.go:31] will retry after 14.697551715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.722798  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:07.774314  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.774682  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:07.774739  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:07.791129  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.791173  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.791194  390588 retry.go:31] will retry after 13.531528334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:08.273899  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.274336  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:08.774067  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.774147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.774508  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.274290  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.274369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.274678  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.774447  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.774528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:09.774936  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:10.274570  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.274961  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:10.774562  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.774915  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.273789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.274110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:12.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.273786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:12.274098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:12.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.774136  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.774066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:14.273794  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.274227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:14.274283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:14.773929  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.774010  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.774363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.273724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.273985  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:16.274139  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.274221  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.274567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:16.274622  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:16.774305  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.774378  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.774644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.274446  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.274866  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.774497  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.774899  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:18.274657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.274734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.275051  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:18.275096  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:18.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.774209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.774026  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.774099  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.774355  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.273801  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.273913  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.773981  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.774053  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.774366  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:20.774423  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:21.274357  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.274428  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.274706  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:21.323061  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:21.389635  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:21.389682  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.389701  390588 retry.go:31] will retry after 37.789083594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.773876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.273915  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.273997  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.344570  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:22.405449  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:22.405493  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.405512  390588 retry.go:31] will retry after 23.725920264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.773711  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.773782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.774033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:23.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.274206  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:23.274261  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:23.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.773766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.774054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.274518  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.274774  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.774608  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.774678  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:25.274658  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.274733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.275077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:25.275131  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:25.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.774508  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.774773  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.274739  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.274817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.275144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.274455  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.274547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.274811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.774572  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:27.775003  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:28.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.274777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.275087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:28.773642  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.773716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.273745  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.274155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.773917  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.774248  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:30.274557  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.274641  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.274916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:30.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:30.774540  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.774632  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.774962  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.274077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.774321  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.774707  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:32.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.274604  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.274936  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:32.274993  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:32.774698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.774804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.274529  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.274787  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.774581  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.774664  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.775008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:34.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.274794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.275152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:34.275214  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:34.773858  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.773932  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.273930  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.274307  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.773735  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:36.774233  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:37.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.274140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:37.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.774471  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.774822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.274598  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.274669  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.274999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.774142  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:39.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.274562  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.274851  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:39.274908  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:39.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.774730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.775049  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.273847  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.774227  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.774300  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.774572  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:41.274605  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.274676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.275014  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:41.275084  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:41.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.273842  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.273921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.274231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.773931  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.774027  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.774383  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.273973  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.274062  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.274409  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.773648  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.773733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.773987  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:43.774033  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:44.273702  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.273808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:44.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.773958  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.273983  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.274063  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.274356  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:45.774231  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:46.131654  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:46.194295  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194358  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194451  390588 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
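	[editor's sketch] The addon failure above ends with "apply failed, will retry": kubectl cannot validate the manifest because the apiserver on port 8441 is refusing connections. A minimal retry wrapper of the same shape is sketched below; the kubectl path, KUBECONFIG, and manifest path are copied from the log, while the attempt count and backoff are assumptions for illustration only.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl apply and retries with a fixed delay
	// while the apiserver is unreachable, returning the last error on exhaustion.
	func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
		kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl" // path from the log
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				kubectl, "apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
			time.Sleep(delay) // back off before retrying
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 10*time.Second)
		if err != nil {
			fmt.Println(err)
		}
	}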
	I1213 10:44:46.274603  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.274700  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.275072  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:46.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.774112  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.774387  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.274208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:48.273867  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.273936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:48.274241  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:48.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.774229  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.273767  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.774519  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.774595  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.774926  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:50.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.274774  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.275102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:50.275164  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:50.774065  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.774140  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.774471  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.274252  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.274326  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.274605  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.774340  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.774416  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.774757  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.274427  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.274511  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.774919  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:52.774958  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:53.274692  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.274773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.275105  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:53.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.273740  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:55.273871  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.273946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.274266  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:55.274336  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:55.773682  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.773752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.773998  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.273698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.273924  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.773928  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:57.774354  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:58.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.273873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.274218  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:58.774470  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.774560  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.774811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.179566  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:59.239921  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.239971  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.240057  390588 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:59.247585  390588 out.go:179] * Enabled addons: 
	I1213 10:44:59.249608  390588 addons.go:530] duration metric: took 1m38.479812026s for enable addons: enabled=[]
	I1213 10:44:59.274157  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.274255  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.274564  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.774339  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.774421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:59.774833  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:00.278749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.278833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.279163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:00.774212  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.774297  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.774688  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.274605  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.274894  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.774686  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.774765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.775087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:01.775143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:02.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.274240  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:02.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.773792  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:04.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.274036  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.274352  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:04.274418  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:04.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.773957  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.774210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.273726  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.274127  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.773770  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:06.774260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:07.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.273836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.274400  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:07.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.774207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.273920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.274303  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.773655  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.773725  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.773989  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:09.273678  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:09.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:09.773807  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.774222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.274017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.274269  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.774349  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.774733  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:11.274712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.274783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.275094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:11.275143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:11.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.774126  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.273826  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.273930  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.773940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.273711  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.274065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.774187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:13.774240  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:14.273793  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.273953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:14.773991  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.774073  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.774396  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.274164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:15.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:16.274172  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.274247  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.280111  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 10:45:16.773739  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.273862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.274194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.773798  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.774048  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:18.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:18.274286  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:18.773986  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.774078  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.774398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.774130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:20.274061  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.274147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.274521  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:20.274567  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:20.774429  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.774513  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.774784  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.274788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.275140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.773809  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.273923  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.274330  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.773836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:22.774266  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:23.273752  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.273825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:23.773854  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.773925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:25.273932  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.274007  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:25.274311  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:25.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.773835  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.273929  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.274023  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.274342  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.774676  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.774744  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.774995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.274109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.773826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.774163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:27.774227  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:28.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.273788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.274057  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:28.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.773816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.774148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.273934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.274250  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.773725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.773794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.774055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:30.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:30.274260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:30.774238  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.774643  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.274624  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.774738  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.775064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.273830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.274149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.773762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:32.774151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:33.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.274135  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:33.773816  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.773892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.274572  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.274643  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.274903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.774729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:34.775152  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:35.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.273759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.274117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:35.774407  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.774479  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.774771  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.274663  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.274756  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.275065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.773912  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.774265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:37.273706  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.273778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.274054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:37.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:37.773740  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.773842  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.273961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.773975  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.774042  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.774302  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:39.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:39.274262  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:39.773743  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.273728  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.274144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.774643  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.774717  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.775033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.273691  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.273765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.774789  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:41.774848  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:42.274590  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.274665  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.275006  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:42.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.774116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.274417  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.274505  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.274764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.774491  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.774561  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:43.774985  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:44.274631  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.274716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:44.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.774086  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.273789  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.273877  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.773938  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.774016  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:46.274211  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.274311  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.274593  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:46.274641  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:46.774347  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.774423  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.774786  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.274695  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.773821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.273791  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.274221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.773944  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:48.774398  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:49.273717  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.274115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:49.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.273881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.774153  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.774227  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.774498  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:50.774547  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:51.274578  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.274980  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:51.773696  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.773772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.774097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.274044  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.774214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:53.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.274028  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.274362  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:53.274420  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:53.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.773918  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.273749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.773750  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:55.774229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:56.273954  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.274030  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.274368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:56.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.774681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.773886  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.773969  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.774297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:57.774351  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:58.274008  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.274074  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.274328  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:58.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.273755  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.273831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.274152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.773661  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.773978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:00.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.273870  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.274207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:00.274265  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:00.774194  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.774271  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.274425  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.274499  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.274770  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.774648  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.774734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.773686  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.774020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:02.774062  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:03.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.273890  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.274214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:03.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.274309  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.274379  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.274657  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.774430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.774509  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:04.774924  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:05.274540  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.274616  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.274963  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:05.773676  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.773758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.774085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.273969  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.274052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.274459  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:07.274619  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.274708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.274974  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:07.275017  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:07.773671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.273847  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.274261  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.773957  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.774035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.774397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.274256  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.773968  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.774044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.774403  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:09.774460  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:10.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:10.774136  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.774210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.274519  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.274594  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.274918  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.774397  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.774832  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:11.774891  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:12.274659  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.274757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:12.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.273921  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.273994  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.773843  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.774234  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:14.273963  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.274066  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.274415  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:14.274474  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:14.773715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.273806  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.274220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.773837  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.773921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:16.274096  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.274165  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.274517  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:16.274565  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:16.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.774356  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.774701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.274489  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.274563  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.274929  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.773641  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.773710  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.773957  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:18.274732  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.274812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.275153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:18.275207  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:18.773906  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.773982  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.774326  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.274430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.274794  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.774601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.774671  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.273724  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.774125  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.774196  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:20.774628  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:21.274424  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.274514  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.274834  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:21.774531  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.774612  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.774944  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.274640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.274709  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.275021  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.774663  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.774773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.775134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:22.775197  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:23.273890  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.273971  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.274309  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:23.773717  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.774083  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:25.274593  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.274667  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.274932  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:25.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:25.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.773769  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.774103  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.274187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.773723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.773803  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.774134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.773942  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.774024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.774376  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:27.774430  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:28.274709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.274789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:28.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.774272  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.273759  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.774348  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.774419  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:29.774820  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:30.274620  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.274696  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.275046  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.775077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.273951  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:32.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.273869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:32.274272  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.773932  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.774017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.774448  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.273707  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.273777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.274033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:34.774219  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:35.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.273839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:35.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.774091  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.273704  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.273807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.773734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:37.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:37.274109  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:37.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.774167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.273869  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.273941  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.774621  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.774711  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.774971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:39.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.273795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.274130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:39.274185  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:39.773882  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.773961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.273738  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.273832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.274158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.774749  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.774834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.775222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:41.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.274347  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:41.274405  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:41.774636  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.774701  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.773953  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.774405  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:43.274638  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.274978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:43.275016  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:43.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.773806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.274363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.774070  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.774138  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.774399  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.273823  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.273898  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.274268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.773995  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.774070  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.774394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:45.774448  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:46.274246  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.274313  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.274596  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:46.774345  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.774417  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.774765  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.274423  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.274522  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.274846  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.774170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.774241  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.774544  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:47.774600  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:48.274170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.274257  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.274614  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:48.774460  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.774547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.774903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.274601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.274681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.274964  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.773817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:50.273855  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.273935  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.274285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:50.274341  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:50.774135  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.774202  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.774454  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.274467  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.274552  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.274884  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.774669  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.774754  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.775052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.273723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.274094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.774189  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:52.774245  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:53.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.274313  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:53.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.274242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.773831  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:54.774330  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:55.273935  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.274280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:55.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.774166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.273793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.274128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.774284  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.774353  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.774609  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:56.774649  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:57.274349  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.274429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.274756  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:57.774568  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.774644  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.274491  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.274570  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.274873  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.774677  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.774750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.775093  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:58.775146  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:59.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.274092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:59.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.273965  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.774530  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.774877  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:01.273680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.274056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:01.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:01.773802  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.774231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.273805  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.773820  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.774149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:03.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.273876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:03.274268  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:03.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.274436  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.274533  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.274808  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.774676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.775027  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.273736  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.273815  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.773934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:05.774242  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:06.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.274139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:06.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.773936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.774268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.274469  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.274550  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.274856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.774641  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.775047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:07.775098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:08.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.273853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:08.773674  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.773747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.773993  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.273756  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:10.274330  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.274409  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.274689  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:10.274730  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:10.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.775070  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.773673  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.773751  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.774001  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:12.774276  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:13.273922  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.273993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.274301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:13.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.774158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.274297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.773969  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.774294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:14.774335  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:15.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:15.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.773859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.774205  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.273875  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.274219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 10:47:16.775086  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:17.273732  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:17.773664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.773749  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.774040  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.773831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.774146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:19.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.273784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:19.274151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:19.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.773873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.774244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.273959  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.274044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.274394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.774369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.774676  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.274781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:21.275128  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:21.773729  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.773910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.273864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.774583  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:23.774974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:24.274727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.274797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.275112  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:24.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.274148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.774201  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:26.273894  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.273970  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:26.274358  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:26.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.774082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.773769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.773862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.273908  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:28.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:29.273783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.274195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:29.773879  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.773954  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.274239  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.775063  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:30.775117  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:31.273664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.273730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.273976  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:31.773680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.774074  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.273770  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:33.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.273816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.274165  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:33.274237  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:33.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.274209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.774154  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.273810  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.773845  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:35.774222  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:36.273675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.274088  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:36.773810  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.774215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.274138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.773861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.774225  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:37.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:38.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.274035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:38.774693  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.774771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.775056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.773832  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.773906  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.774253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:39.774308  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:40.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.274596  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.274862  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:40.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.774759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.775099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.274171  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.773727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.773800  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:42.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.274281  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:42.274339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:42.773878  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.773968  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.774283  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.274019  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.274334  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.774150  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.274183  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.773864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.774198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:44.774253  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:45.273924  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.274419  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:45.773843  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.773923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.774295  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.274029  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:46.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:47.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.274043  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.274393  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:47.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.773795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.773914  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.773990  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.774305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:48.774364  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:49.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.273791  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:49.773785  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.274190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.774233  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.774309  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.774588  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:50.774631  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:51.274650  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.274724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.275059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:51.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.774236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.274538  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.274799  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.774588  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.774666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.775007  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:52.775061  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:53.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:53.773675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.773745  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.774008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.273801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.773943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:55.273989  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.274065  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.274332  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:55.274372  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:55.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.774114  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.774457  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.274294  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.274368  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.274696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.774209  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.774284  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.774573  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:57.274365  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.274443  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.274796  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:57.274856  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:57.774615  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.774691  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.775029  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.274293  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.274363  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.274642  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.774411  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.774519  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.774841  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:59.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.274571  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.274905  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:59.274961  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:59.774120  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.774186  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.774529  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.274587  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.274674  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.275002  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.773691  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.773785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.774128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.273694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.273766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.274084  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.773905  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.774301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:01.774362  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:02.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.273943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:02.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.773929  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.273855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.773848  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.774192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:04.274348  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.274421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.274701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:04.274747  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:04.774520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.774598  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.774955  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.274625  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.274699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.275061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.273741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.773880  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.773956  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:06.774339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:07.273666  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.274015  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:07.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.773867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.273802  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.774472  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.774731  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:08.774771  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:09.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.274602  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.274979  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:09.774731  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.774819  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.775148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.274501  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.274577  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.274825  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.774760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.775071  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:10.775127  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:11.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.273737  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:11.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.774619  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.774916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.274606  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.274685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.275008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.773772  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.773849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:13.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.274085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:13.274132  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:13.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.273776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.773757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.774016  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:15.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.274160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:15.274217  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:15.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.273918  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.773913  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.773993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:17.273914  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:17.274360  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:17.773705  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.773779  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.774047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.274175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:19.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.274589  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:19.274893  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:19.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.774722  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.775081  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.273688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.273761  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.773877  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.773951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.774252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.274225  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.274303  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.274658  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.774461  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.774542  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:21.774990  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:22.273646  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.273719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.273971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:22.773678  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.773773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.273879  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.774466  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.774555  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.774828  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:24.274703  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.274778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.275113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:24.275166  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:24.773777  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.273716  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.773749  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.273812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.274134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.774477  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.774735  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:26.774777  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:27.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.274638  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.274990  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:27.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.274454  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.274531  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.774713  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:28.775072  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:29.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:29.773685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.773767  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.774067  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.274172  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:31.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.273960  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.274245  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:31.274287  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:31.773960  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.774353  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.273874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.274212  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.774110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.774195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:33.774250  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:34.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.274551  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.274859  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:34.774530  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.774653  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.774994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.774121  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:36.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:36.274191  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:36.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.773953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.273782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.274052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.774133  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.274096  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.774434  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.774523  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.774857  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:38.774915  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:39.274697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.274775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:39.773799  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.773875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.274392  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.274461  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.274778  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.774675  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:40.775056  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:41.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.274099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:41.774223  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.774306  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.774579  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.274405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.274535  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.274934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.774574  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:43.273697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.274034  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:43.274076  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:43.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.773825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.273947  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:45.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.274348  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:45.274406  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:45.774078  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.774155  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.774567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.274333  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.274401  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.274668  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.774394  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.774466  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.774810  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:47.274617  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.275033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:47.275083  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:47.774292  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.774364  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.774696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.274590  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.274935  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.774610  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.775020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.273781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.274042  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.773747  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:49.774228  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:50.273926  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.274364  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:50.774202  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.774276  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.274422  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.274498  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.274822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.774623  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.774699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.775050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:51.775104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:52.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.273845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:52.773759  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.273848  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.273927  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.774090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:54.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:54.274238  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.273994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.773662  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.773743  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:56.274020  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.274092  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.274398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:56.274455  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:56.773718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.773898  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.773979  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.774308  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.274114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.774247  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:58.774302  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:59.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:59.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.273835  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.273945  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.274259  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.774386  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.774788  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:00.774843  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:01.274715  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.274784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:01.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.273897  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.274252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.773815  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.773883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:03.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.273923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.274294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:03.274348  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:03.773866  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.773946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.774285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.273977  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.274050  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.274314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.273962  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.773962  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.774279  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:05.774317  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:06.274277  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.274357  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.274684  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:06.774350  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.774429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.774754  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.274072  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.274145  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.274401  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.774168  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:08.273771  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:08.274229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:08.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.773911  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.773987  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.774329  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:10.274643  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.274715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.275018  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:10.275073  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:10.774631  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.774708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.273785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.274118  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.273785  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.773779  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:12.774264  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:13.274414  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.274491  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.274806  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:13.774595  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.274700  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.274776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.275122  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.773666  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.773732  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:15.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.273760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:15.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:15.773812  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.273920  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.774406  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:17.274090  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.274171  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.274528  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:17.274584  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:17.774247  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.774320  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.774585  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.274376  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.274452  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.274800  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.774498  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:19.274279  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.274351  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.274659  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:19.274729  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:19.774509  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.774592  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.774934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.273655  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.273729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.773657  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.773723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.773970  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:49:21.273834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:21.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.773895  390588 type.go:168] "Request Body" body=""
	W1213 10:49:21.773963  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1213 10:49:21.773982  390588 node_ready.go:38] duration metric: took 6m0.000438977s for node "functional-407525" to be "Ready" ...
	I1213 10:49:21.777070  390588 out.go:203] 
	W1213 10:49:21.779923  390588 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:49:21.779945  390588 out.go:285] * 
	W1213 10:49:21.782066  390588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:21.784854  390588 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123701412Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123709592Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123715558Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123721006Z" level=info msg="RDT not available in the host system"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123734454Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.124544973Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.124572083Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.124588945Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125247847Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125273513Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125409662Z" level=info msg="Updated default CNI network name to "
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125957779Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.126329836Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.126386468Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176496877Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176536107Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176582655Z" level=info msg="Create NRI interface"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176685655Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.17669427Z" level=info msg="runtime interface created"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176705109Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.17671146Z" level=info msg="runtime interface starting up..."
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.17671745Z" level=info msg="starting plugins..."
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176730611Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176801118Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:43:19 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:23.776059    8556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:23.776757    8556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:23.778391    8556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:23.778964    8556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:23.780686    8556 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	
	
	==> kernel <==
	 10:49:23 up  2:31,  0 user,  load average: 0.18, 0.25, 0.71
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:49:21 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:22 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 13 10:49:22 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:22 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:22 functional-407525 kubelet[8449]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:22 functional-407525 kubelet[8449]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:22 functional-407525 kubelet[8449]: E1213 10:49:22.341940    8449 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:22 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:22 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:23 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 13 10:49:23 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:23 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:23 functional-407525 kubelet[8470]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:23 functional-407525 kubelet[8470]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:23 functional-407525 kubelet[8470]: E1213 10:49:23.108494    8470 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:23 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:23 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:23 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 13 10:49:23 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:23 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:23 functional-407525 kubelet[8560]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:23 functional-407525 kubelet[8560]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:23 functional-407525 kubelet[8560]: E1213 10:49:23.826906    8560 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:23 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:23 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (361.52656ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.64s)
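	Editor's note (not part of the captured run): the post-mortem above shows two stacked problems. Every node "Ready" poll against https://192.168.49.2:8441 ends in "connection refused" until the 6m wait times out, and the kubelet on the node never stays up because it rejects the host's cgroup layout ("kubelet is configured to not run on a host using cgroup v1"). A minimal sketch of how one might confirm this by hand is below; the profile name functional-407525 comes from this run, the commands themselves are generic checks and are assumptions, not something the test harness executed.

	# Sketch: check which cgroup hierarchy the node exposes.
	# A cgroup v2 (unified) host prints "cgroup2fs"; "tmpfs" indicates the legacy
	# cgroup v1 layout, which matches the kubelet validation error in the logs above.
	minikube -p functional-407525 ssh -- stat -fc %T /sys/fs/cgroup

	# Sketch: probe the apiserver port that the node-ready loop keeps polling.
	# While the kubelet cannot start, this is expected to fail with "connection refused".
	curl -k https://192.168.49.2:8441/livez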

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-407525 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-407525 get po -A: exit status 1 (61.104181ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-407525 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-407525 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-407525 get po -A"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (307.275011ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 logs -n 25: (1.029994418s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-371413 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh -- ls -la /mount-9p                                                                                                         │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh sudo umount -f /mount-9p                                                                                                    │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount2 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount1 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ mount          │ -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount3 --alsologtostderr -v=1                                │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount1                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ ssh            │ functional-371413 ssh findmnt -T /mount1                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh findmnt -T /mount2                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh findmnt -T /mount3                                                                                                          │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ mount          │ -p functional-371413 --kill=true                                                                                                                  │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ update-context │ functional-371413 update-context --alsologtostderr -v=2                                                                                           │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format short --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format yaml --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh            │ functional-371413 ssh pgrep buildkitd                                                                                                             │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ image          │ functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr                                            │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls                                                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format json --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image          │ functional-371413 image ls --format table --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ delete         │ -p functional-371413                                                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ start          │ -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ start          │ -p functional-407525 --alsologtostderr -v=8                                                                                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:43 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:43:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
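
The header layout documented above is the standard klog format used throughout this log. As an illustrative sketch (not part of minikube), the Go snippet below splits one such line into its fields with a regular expression; the sample line is copied from the entries that follow.

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the documented layout: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		sample := "I1213 10:43:16.189245  390588 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s thread=%s file=%s line=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
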
	I1213 10:43:16.189245  390588 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:43:16.189385  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189397  390588 out.go:374] Setting ErrFile to fd 2...
	I1213 10:43:16.189403  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189684  390588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:43:16.190095  390588 out.go:368] Setting JSON to false
	I1213 10:43:16.190986  390588 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8749,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:43:16.191060  390588 start.go:143] virtualization:  
	I1213 10:43:16.194511  390588 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:43:16.198204  390588 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:43:16.198321  390588 notify.go:221] Checking for updates...
	I1213 10:43:16.204163  390588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:43:16.207088  390588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:16.209934  390588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:43:16.212863  390588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:43:16.215711  390588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:43:16.219166  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:16.219330  390588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:43:16.245531  390588 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:43:16.245660  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.304777  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.295770012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.304888  390588 docker.go:319] overlay module found
	I1213 10:43:16.309644  390588 out.go:179] * Using the docker driver based on existing profile
	I1213 10:43:16.312430  390588 start.go:309] selected driver: docker
	I1213 10:43:16.312447  390588 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.312556  390588 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:43:16.312654  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.369591  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.360947105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.370024  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:16.370077  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:16.370130  390588 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.374951  390588 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:43:16.377750  390588 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:43:16.380575  390588 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:43:16.383625  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:16.383675  390588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:43:16.383684  390588 cache.go:65] Caching tarball of preloaded images
	I1213 10:43:16.383721  390588 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:43:16.383768  390588 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:43:16.383779  390588 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:43:16.383909  390588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:43:16.402414  390588 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:43:16.402437  390588 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:43:16.402458  390588 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:43:16.402490  390588 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:43:16.402563  390588 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-407525"
	I1213 10:43:16.402589  390588 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:43:16.402599  390588 fix.go:54] fixHost starting: 
	I1213 10:43:16.402860  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:16.419664  390588 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:43:16.419692  390588 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:43:16.423019  390588 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:43:16.423065  390588 machine.go:94] provisionDockerMachine start ...
	I1213 10:43:16.423166  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.440791  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.441132  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.441147  390588 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:43:16.590928  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.590952  390588 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:43:16.591012  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.608907  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.609223  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.609243  390588 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:43:16.770512  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.770629  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.791074  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.791392  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.791418  390588 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:43:16.939938  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
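
Each SSH step above first resolves the host port that Docker mapped to the container's 22/tcp (33158 in this run) via docker container inspect with a Go template. A minimal sketch of the same lookup from Go, assuming the docker CLI is on PATH and reusing the container name from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the log uses to find the host port mapped to the
		// container's SSH port (22/tcp); prints e.g. "33158".
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-407525").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
	}

The mapping is assigned by Docker, which is presumably why the log repeats the lookup before every SSH command rather than caching it.
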
	I1213 10:43:16.939965  390588 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:43:16.940042  390588 ubuntu.go:190] setting up certificates
	I1213 10:43:16.940060  390588 provision.go:84] configureAuth start
	I1213 10:43:16.940146  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:16.959231  390588 provision.go:143] copyHostCerts
	I1213 10:43:16.959277  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959321  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:43:16.959334  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959423  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:43:16.959550  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959579  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:43:16.959590  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959624  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:43:16.959682  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959708  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:43:16.959712  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959738  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:43:16.959842  390588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:43:17.067458  390588 provision.go:177] copyRemoteCerts
	I1213 10:43:17.067620  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:43:17.067673  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.087609  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.191151  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:43:17.191266  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:43:17.208031  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:43:17.208139  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:43:17.224829  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:43:17.224888  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:43:17.242075  390588 provision.go:87] duration metric: took 301.967659ms to configureAuth
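
configureAuth above copies the CA material and generates a server certificate whose SANs are listed in the log (127.0.0.1, 192.168.49.2, functional-407525, localhost, minikube). The sketch below is a rough, self-contained approximation in Go: it produces a certificate with the same SANs but self-signs it for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair shown above.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-407525"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"functional-407525", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		// Self-signed here for brevity; the real flow signs with the CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
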
	I1213 10:43:17.242106  390588 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:43:17.242287  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:17.242396  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.259726  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:17.260059  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:17.260089  390588 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:43:17.589136  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:43:17.589164  390588 machine.go:97] duration metric: took 1.166089785s to provisionDockerMachine
	I1213 10:43:17.589176  390588 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:43:17.589189  390588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:43:17.589251  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:43:17.589299  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.609214  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.715839  390588 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:43:17.719089  390588 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:43:17.719109  390588 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:43:17.719114  390588 command_runner.go:130] > VERSION_ID="12"
	I1213 10:43:17.719118  390588 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:43:17.719124  390588 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:43:17.719128  390588 command_runner.go:130] > ID=debian
	I1213 10:43:17.719139  390588 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:43:17.719147  390588 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:43:17.719152  390588 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:43:17.719195  390588 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:43:17.719216  390588 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:43:17.719233  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:43:17.719286  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:43:17.719370  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:43:17.719381  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem
	I1213 10:43:17.719455  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:43:17.719463  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> /etc/test/nested/copy/356328/hosts
	I1213 10:43:17.719505  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:43:17.727090  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:17.744131  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:43:17.760861  390588 start.go:296] duration metric: took 171.654498ms for postStartSetup
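
postStartSetup scans the local .minikube/addons and .minikube/files trees and maps each file under files/ to the same path on the node (e.g. files/etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem). A small sketch of that mapping, assuming a local root named .minikube/files:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	func main() {
		// Hypothetical local root; the log uses
		// /home/jenkins/minikube-integration/22127-354468/.minikube/files.
		root := ".minikube/files"
		filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, relErr := filepath.Rel(root, path)
			if relErr != nil {
				return relErr
			}
			// e.g. etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem
			fmt.Printf("%s -> /%s\n", path, filepath.ToSlash(rel))
			return nil
		})
	}
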
	I1213 10:43:17.760950  390588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:43:17.760996  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.777913  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.880295  390588 command_runner.go:130] > 14%
	I1213 10:43:17.880360  390588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:43:17.884436  390588 command_runner.go:130] > 169G
	I1213 10:43:17.884867  390588 fix.go:56] duration metric: took 1.482264041s for fixHost
	I1213 10:43:17.884887  390588 start.go:83] releasing machines lock for "functional-407525", held for 1.482310261s
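
The disk checks above shell out to df and awk and read back "14%" used and "169G" free on /var. Purely as an illustration, the same two probes and the parsing of their output could be driven from Go like this (assumes a POSIX sh, df, and awk on the machine running it):

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		usedOut, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			panic(err)
		}
		freeOut, err := exec.Command("sh", "-c", `df -BG /var | awk 'NR==2{print $4}'`).Output()
		if err != nil {
			panic(err)
		}
		usedPct, _ := strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(usedOut)), "%"))
		freeGB, _ := strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(freeOut)), "G"))
		fmt.Printf("/var: %d%% used, %dGB free\n", usedPct, freeGB)
	}
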
	I1213 10:43:17.884953  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:17.902293  390588 ssh_runner.go:195] Run: cat /version.json
	I1213 10:43:17.902324  390588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:43:17.902343  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.902383  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.922251  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.922884  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:18.027684  390588 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:43:18.027820  390588 ssh_runner.go:195] Run: systemctl --version
	I1213 10:43:18.121469  390588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:43:18.124198  390588 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:43:18.124239  390588 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:43:18.124329  390588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:43:18.162710  390588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:43:18.167030  390588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:43:18.167242  390588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:43:18.167335  390588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:43:18.175207  390588 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
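
The CNI step looks for bridge/podman configs under /etc/cni/net.d and parks them with a .mk_disabled suffix; in this run there were none to disable. A hedged Go equivalent of that find-and-rename, using only the names taken from the command in the log:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			// Mirrors the `find ... -exec mv {} {}.mk_disabled` in the log.
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
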
	I1213 10:43:18.175230  390588 start.go:496] detecting cgroup driver to use...
	I1213 10:43:18.175264  390588 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:43:18.175320  390588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:43:18.190633  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:43:18.203672  390588 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:43:18.203747  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:43:18.219163  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:43:18.232309  390588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:43:18.357889  390588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:43:18.493929  390588 docker.go:234] disabling docker service ...
	I1213 10:43:18.494052  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:43:18.509796  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:43:18.523416  390588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:43:18.655317  390588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:43:18.778247  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:43:18.791182  390588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:43:18.805083  390588 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:43:18.806588  390588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:43:18.806679  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.815701  390588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:43:18.815803  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.824913  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.834321  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.843170  390588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:43:18.851373  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.860701  390588 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.869075  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
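
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs, reset conmon_cgroup, and ensure default_sysctls opens unprivileged ports. As a sketch only, the first two rewrites expressed in Go rather than sed:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of the two sed one-liners in the log: pin the pause image
		// and switch the cgroup manager to cgroupfs.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
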
	I1213 10:43:18.877860  390588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:43:18.884514  390588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:43:18.885462  390588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:43:18.893210  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.009167  390588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:43:19.185094  390588 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:43:19.185195  390588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:43:19.189492  390588 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:43:19.189518  390588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:43:19.189526  390588 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1213 10:43:19.189541  390588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:19.189566  390588 command_runner.go:130] > Access: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189581  390588 command_runner.go:130] > Modify: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189586  390588 command_runner.go:130] > Change: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189590  390588 command_runner.go:130] >  Birth: -
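
After restarting CRI-O the log waits up to 60s for /var/run/crio/crio.sock to appear before proceeding. A minimal polling loop for that wait (the 500ms interval is an assumption; only the socket path and the 60s budget come from the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout elapses,
	// mirroring the "Will wait 60s for socket path" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket is up")
	}
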
	I1213 10:43:19.190244  390588 start.go:564] Will wait 60s for crictl version
	I1213 10:43:19.190335  390588 ssh_runner.go:195] Run: which crictl
	I1213 10:43:19.193561  390588 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:43:19.194286  390588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:43:19.222711  390588 command_runner.go:130] > Version:  0.1.0
	I1213 10:43:19.222747  390588 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:43:19.222752  390588 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:43:19.222773  390588 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:43:19.225058  390588 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:43:19.225194  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.255970  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.256013  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.256019  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.256025  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.256044  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.256051  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.256078  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.256090  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.256094  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.256098  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.256105  390588 command_runner.go:130] >      static
	I1213 10:43:19.256109  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.256113  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.256117  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.256123  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.256128  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.256131  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.256136  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.256166  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.256195  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.258161  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.285922  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.285950  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.285964  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.285970  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.285975  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.285999  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.286010  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.286017  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.286022  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.286028  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.286046  390588 command_runner.go:130] >      static
	I1213 10:43:19.286056  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.286061  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.286075  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.286093  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.286102  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.286108  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.286132  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.286137  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.286153  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.291101  390588 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:43:19.293929  390588 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:43:19.310541  390588 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:43:19.314437  390588 command_runner.go:130] > 192.168.49.1	host.minikube.internal
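
The docker network inspect call above uses a --format template that emits a small JSON document (Name, Driver, Subnet, Gateway, MTU, ContainerIPs). A rough sketch of decoding it in Go; note the template leaves a trailing comma inside ContainerIPs, which strict JSON parsing rejects, so the sketch strips it before unmarshalling:

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Shape of the JSON the --format template above produces.
	type networkInfo struct {
		Name         string   `json:"Name"`
		Driver       string   `json:"Driver"`
		Subnet       string   `json:"Subnet"`
		Gateway      string   `json:"Gateway"`
		MTU          int      `json:"MTU"`
		ContainerIPs []string `json:"ContainerIPs"`
	}

	func main() {
		format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}`
		out, err := exec.Command("docker", "network", "inspect", "functional-407525", "--format", format).Output()
		if err != nil {
			panic(err)
		}
		// Drop the trailing comma the template leaves inside the array.
		out = bytes.Replace(out, []byte(",]"), []byte("]"), 1)
		var ni networkInfo
		if err := json.Unmarshal(out, &ni); err != nil {
			panic(err)
		}
		fmt.Printf("network %s: driver=%s subnet=%s gateway=%s nodes=%v\n", ni.Name, ni.Driver, ni.Subnet, ni.Gateway, ni.ContainerIPs)
	}
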
	I1213 10:43:19.314776  390588 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:43:19.314904  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:19.314962  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.346332  390588 command_runner.go:130] > {
	I1213 10:43:19.346357  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.346361  390588 command_runner.go:130] >     {
	I1213 10:43:19.346369  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.346374  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346380  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.346383  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346387  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346396  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.346404  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.346411  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346416  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.346423  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346429  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346436  390588 command_runner.go:130] >     },
	I1213 10:43:19.346439  390588 command_runner.go:130] >     {
	I1213 10:43:19.346445  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.346449  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346457  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.346467  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346472  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346480  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.346491  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.346494  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346508  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.346518  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346525  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346531  390588 command_runner.go:130] >     },
	I1213 10:43:19.346535  390588 command_runner.go:130] >     {
	I1213 10:43:19.346541  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.346548  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346553  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.346556  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346563  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346571  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.346582  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.346586  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346590  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.346594  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.346600  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346604  390588 command_runner.go:130] >     },
	I1213 10:43:19.346610  390588 command_runner.go:130] >     {
	I1213 10:43:19.346616  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.346621  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346628  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.346632  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346636  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346646  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.346657  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.346661  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346667  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.346671  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346675  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346679  390588 command_runner.go:130] >       },
	I1213 10:43:19.346690  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346698  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346702  390588 command_runner.go:130] >     },
	I1213 10:43:19.346705  390588 command_runner.go:130] >     {
	I1213 10:43:19.346715  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.346722  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346728  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.346731  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346736  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346745  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.346760  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.346764  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346768  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.346775  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346778  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346782  390588 command_runner.go:130] >       },
	I1213 10:43:19.346786  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346796  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346799  390588 command_runner.go:130] >     },
	I1213 10:43:19.346802  390588 command_runner.go:130] >     {
	I1213 10:43:19.346811  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.346818  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346824  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.346828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346832  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346842  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.346851  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.346859  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346863  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.346866  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346870  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346875  390588 command_runner.go:130] >       },
	I1213 10:43:19.346879  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346886  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346889  390588 command_runner.go:130] >     },
	I1213 10:43:19.346892  390588 command_runner.go:130] >     {
	I1213 10:43:19.346898  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.346911  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346917  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.346923  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346927  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346934  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.346946  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.346950  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346954  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.346958  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346964  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346967  390588 command_runner.go:130] >     },
	I1213 10:43:19.346970  390588 command_runner.go:130] >     {
	I1213 10:43:19.346977  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.346984  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346990  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.346993  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346997  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347007  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.347027  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.347034  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347038  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.347041  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347045  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.347048  390588 command_runner.go:130] >       },
	I1213 10:43:19.347053  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347058  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.347062  390588 command_runner.go:130] >     },
	I1213 10:43:19.347065  390588 command_runner.go:130] >     {
	I1213 10:43:19.347072  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.347078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.347083  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.347087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347097  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347109  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.347120  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.347124  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347132  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.347135  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347140  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.347145  390588 command_runner.go:130] >       },
	I1213 10:43:19.347149  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347155  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.347158  390588 command_runner.go:130] >     }
	I1213 10:43:19.347161  390588 command_runner.go:130] >   ]
	I1213 10:43:19.347164  390588 command_runner.go:130] > }
	I1213 10:43:19.347379  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.347391  390588 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:43:19.347452  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.372755  390588 command_runner.go:130] > {
	I1213 10:43:19.372774  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.372779  390588 command_runner.go:130] >     {
	I1213 10:43:19.372788  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.372792  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372799  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.372803  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372807  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372816  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.372824  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.372828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372832  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.372836  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372851  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372854  390588 command_runner.go:130] >     },
	I1213 10:43:19.372857  390588 command_runner.go:130] >     {
	I1213 10:43:19.372863  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.372868  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372873  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.372876  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372880  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372889  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.372897  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.372900  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372904  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.372908  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372920  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372924  390588 command_runner.go:130] >     },
	I1213 10:43:19.372927  390588 command_runner.go:130] >     {
	I1213 10:43:19.372934  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.372938  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372943  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.372947  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372950  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372958  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.372966  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.372970  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372973  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.372978  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.372982  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372985  390588 command_runner.go:130] >     },
	I1213 10:43:19.372988  390588 command_runner.go:130] >     {
	I1213 10:43:19.372994  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.372998  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373002  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.373007  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373011  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373018  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.373025  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.373029  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373033  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.373036  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373040  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373043  390588 command_runner.go:130] >       },
	I1213 10:43:19.373052  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373056  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373059  390588 command_runner.go:130] >     },
	I1213 10:43:19.373062  390588 command_runner.go:130] >     {
	I1213 10:43:19.373070  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.373078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373083  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.373087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373090  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373098  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.373110  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.373114  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373118  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.373122  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373126  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373129  390588 command_runner.go:130] >       },
	I1213 10:43:19.373132  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373136  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373139  390588 command_runner.go:130] >     },
	I1213 10:43:19.373142  390588 command_runner.go:130] >     {
	I1213 10:43:19.373148  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.373151  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373157  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.373161  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373164  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373172  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.373181  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.373184  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373188  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.373191  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373195  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373198  390588 command_runner.go:130] >       },
	I1213 10:43:19.373202  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373206  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373208  390588 command_runner.go:130] >     },
	I1213 10:43:19.373211  390588 command_runner.go:130] >     {
	I1213 10:43:19.373218  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.373222  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373230  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.373234  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373238  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373246  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.373253  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.373256  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373260  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.373263  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373267  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373270  390588 command_runner.go:130] >     },
	I1213 10:43:19.373273  390588 command_runner.go:130] >     {
	I1213 10:43:19.373279  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.373283  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373288  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.373291  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373295  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373303  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.373321  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.373324  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373328  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.373331  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373336  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373339  390588 command_runner.go:130] >       },
	I1213 10:43:19.373343  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373346  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373349  390588 command_runner.go:130] >     },
	I1213 10:43:19.373352  390588 command_runner.go:130] >     {
	I1213 10:43:19.373359  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.373362  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373367  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.373372  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373376  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373383  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.373394  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.373398  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373402  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.373405  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373409  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.373412  390588 command_runner.go:130] >       },
	I1213 10:43:19.373419  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373422  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.373426  390588 command_runner.go:130] >     }
	I1213 10:43:19.373428  390588 command_runner.go:130] >   ]
	I1213 10:43:19.373432  390588 command_runner.go:130] > }
	I1213 10:43:19.375861  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.375885  390588 cache_images.go:86] Images are preloaded, skipping loading
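	The two `crictl images --output json` dumps above share one JSON shape: a top-level "images" array whose entries carry "id", "repoTags", "repoDigests", "size" (as a string), and "pinned". The following is a minimal, hedged Go sketch of how such a payload could be decoded and checked against an expected preload set; it is not minikube's actual cache_images implementation, the struct covers only the fields visible in the log, and the required tags are simply copied from the output above.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors only the fields visible in the log; crictl's real schema has more.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // crictl prints the byte count as a string
		Pinned      bool     `json:"pinned"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Run the same command the log shows (needs crictl and a reachable CRI-O socket).
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}

		var list criImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}

		// Index every tag so membership checks are constant time.
		tags := map[string]bool{}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				tags[t] = true
			}
		}

		// Versions copied from the log above; adjust for the cluster under test.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/pause:3.10.1",
		}
		for _, want := range required {
			fmt.Printf("%-50s preloaded: %v\n", want, tags[want])
		}
	}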
	I1213 10:43:19.375894  390588 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:43:19.375988  390588 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
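	The unit fragment above is the rendered drop-in that kubeadm.go logs for this node. As a purely illustrative sketch (not minikube's actual template; the struct fields and unit layout are assumptions reconstructed from the rendered output, with the values taken from the log), the same text could be produced with Go's text/template like this:

	package main

	import (
		"os"
		"text/template"
	)

	// unitTmpl reproduces the drop-in shown in the log; only the host-specific
	// values are parameterized.
	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	type kubeletUnit struct {
		KubeletPath string
		NodeName    string
		NodeIP      string
	}

	func main() {
		// Values copied from the log output above.
		u := kubeletUnit{
			KubeletPath: "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
			NodeName:    "functional-407525",
			NodeIP:      "192.168.49.2",
		}
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, u); err != nil {
			panic(err)
		}
	}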
	I1213 10:43:19.376071  390588 ssh_runner.go:195] Run: crio config
	I1213 10:43:19.425743  390588 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:43:19.425768  390588 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:43:19.425775  390588 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:43:19.425779  390588 command_runner.go:130] > #
	I1213 10:43:19.425787  390588 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:43:19.425793  390588 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:43:19.425801  390588 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:43:19.425810  390588 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:43:19.425814  390588 command_runner.go:130] > # reload'.
	I1213 10:43:19.425821  390588 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:43:19.425828  390588 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:43:19.425838  390588 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:43:19.425844  390588 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:43:19.425847  390588 command_runner.go:130] > [crio]
	I1213 10:43:19.425854  390588 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:43:19.425862  390588 command_runner.go:130] > # containers images, in this directory.
	I1213 10:43:19.426591  390588 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:43:19.426608  390588 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:43:19.427294  390588 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:43:19.427313  390588 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:43:19.427819  390588 command_runner.go:130] > # imagestore = ""
	I1213 10:43:19.427842  390588 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:43:19.427850  390588 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:43:19.428482  390588 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:43:19.428503  390588 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:43:19.428511  390588 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:43:19.428824  390588 command_runner.go:130] > # storage_option = [
	I1213 10:43:19.429159  390588 command_runner.go:130] > # ]
	I1213 10:43:19.429181  390588 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:43:19.429189  390588 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:43:19.429811  390588 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:43:19.429832  390588 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:43:19.429847  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:43:19.429857  390588 command_runner.go:130] > # always happen on a node reboot
	I1213 10:43:19.430483  390588 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:43:19.430528  390588 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:43:19.430541  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:43:19.430547  390588 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:43:19.431051  390588 command_runner.go:130] > # version_file_persist = ""
	I1213 10:43:19.431076  390588 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:43:19.431086  390588 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:43:19.431716  390588 command_runner.go:130] > # internal_wipe = true
	I1213 10:43:19.431739  390588 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:43:19.431747  390588 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:43:19.432440  390588 command_runner.go:130] > # internal_repair = true
	I1213 10:43:19.432456  390588 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:43:19.432463  390588 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:43:19.432469  390588 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:43:19.432478  390588 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:43:19.432487  390588 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:43:19.432491  390588 command_runner.go:130] > [crio.api]
	I1213 10:43:19.432496  390588 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:43:19.432503  390588 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:43:19.432512  390588 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:43:19.432517  390588 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:43:19.432544  390588 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:43:19.432552  390588 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:43:19.432851  390588 command_runner.go:130] > # stream_port = "0"
	I1213 10:43:19.432867  390588 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:43:19.432873  390588 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:43:19.432879  390588 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:43:19.432886  390588 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:43:19.432897  390588 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:43:19.432906  390588 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433090  390588 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:43:19.433111  390588 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:43:19.433117  390588 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433335  390588 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:43:19.433354  390588 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:43:19.433362  390588 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:43:19.433373  390588 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:43:19.433389  390588 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:43:19.433408  390588 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433419  390588 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:43:19.433428  390588 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433678  390588 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:43:19.433694  390588 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:43:19.433701  390588 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:43:19.433705  390588 command_runner.go:130] > [crio.runtime]
	I1213 10:43:19.433711  390588 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:43:19.433719  390588 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:43:19.433726  390588 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:43:19.433733  390588 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:43:19.433737  390588 command_runner.go:130] > # default_ulimits = [
	I1213 10:43:19.433744  390588 command_runner.go:130] > # ]
	I1213 10:43:19.433751  390588 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:43:19.433758  390588 command_runner.go:130] > # no_pivot = false
	I1213 10:43:19.433764  390588 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:43:19.433771  390588 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:43:19.433778  390588 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:43:19.433785  390588 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:43:19.433790  390588 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:43:19.433797  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.433949  390588 command_runner.go:130] > # conmon = ""
	I1213 10:43:19.433968  390588 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:43:19.433978  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:43:19.434402  390588 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:43:19.434425  390588 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:43:19.434435  390588 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:43:19.434446  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.434453  390588 command_runner.go:130] > # conmon_env = [
	I1213 10:43:19.434466  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434472  390588 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:43:19.434478  390588 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:43:19.434484  390588 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:43:19.434488  390588 command_runner.go:130] > # default_env = [
	I1213 10:43:19.434491  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434497  390588 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:43:19.434515  390588 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:43:19.434525  390588 command_runner.go:130] > # selinux = false
	I1213 10:43:19.434535  390588 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:43:19.434543  390588 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:43:19.434555  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434559  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.434565  390588 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:43:19.434570  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434841  390588 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:43:19.434858  390588 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:43:19.434865  390588 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:43:19.434872  390588 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:43:19.434885  390588 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:43:19.434891  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434896  390588 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:43:19.434902  390588 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:43:19.434908  390588 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:43:19.434913  390588 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:43:19.434937  390588 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:43:19.434946  390588 command_runner.go:130] > # blockio parameters.
	I1213 10:43:19.434950  390588 command_runner.go:130] > # blockio_reload = false
	I1213 10:43:19.434957  390588 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:43:19.434961  390588 command_runner.go:130] > # irqbalance daemon.
	I1213 10:43:19.434966  390588 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:43:19.434972  390588 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:43:19.434982  390588 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:43:19.434992  390588 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:43:19.435365  390588 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:43:19.435381  390588 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:43:19.435387  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.435392  390588 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:43:19.435398  390588 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:43:19.435404  390588 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:43:19.435411  390588 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:43:19.435584  390588 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:43:19.435601  390588 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:43:19.435608  390588 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:43:19.435617  390588 command_runner.go:130] > # will be added.
	I1213 10:43:19.436649  390588 command_runner.go:130] > # default_capabilities = [
	I1213 10:43:19.436661  390588 command_runner.go:130] > # 	"CHOWN",
	I1213 10:43:19.436665  390588 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:43:19.436669  390588 command_runner.go:130] > # 	"FSETID",
	I1213 10:43:19.436673  390588 command_runner.go:130] > # 	"FOWNER",
	I1213 10:43:19.436679  390588 command_runner.go:130] > # 	"SETGID",
	I1213 10:43:19.436683  390588 command_runner.go:130] > # 	"SETUID",
	I1213 10:43:19.436708  390588 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:43:19.436718  390588 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:43:19.436722  390588 command_runner.go:130] > # 	"KILL",
	I1213 10:43:19.436725  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436737  390588 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:43:19.436744  390588 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:43:19.436749  390588 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:43:19.436759  390588 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:43:19.436773  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436777  390588 command_runner.go:130] > default_sysctls = [
	I1213 10:43:19.436788  390588 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:43:19.436794  390588 command_runner.go:130] > ]
	I1213 10:43:19.436799  390588 command_runner.go:130] > # List of devices on the host that a
	I1213 10:43:19.436806  390588 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:43:19.436813  390588 command_runner.go:130] > # allowed_devices = [
	I1213 10:43:19.436817  390588 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:43:19.436820  390588 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:43:19.436823  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436828  390588 command_runner.go:130] > # List of additional devices. specified as
	I1213 10:43:19.436836  390588 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:43:19.436842  390588 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:43:19.436850  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436857  390588 command_runner.go:130] > # additional_devices = [
	I1213 10:43:19.436861  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436868  390588 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:43:19.436872  390588 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:43:19.436878  390588 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:43:19.436882  390588 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:43:19.436888  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436895  390588 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:43:19.436904  390588 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:43:19.436908  390588 command_runner.go:130] > # Defaults to false.
	I1213 10:43:19.436913  390588 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:43:19.436919  390588 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:43:19.436926  390588 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:43:19.436930  390588 command_runner.go:130] > # hooks_dir = [
	I1213 10:43:19.436936  390588 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:43:19.436942  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436948  390588 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:43:19.436964  390588 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:43:19.436969  390588 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:43:19.436973  390588 command_runner.go:130] > #
	I1213 10:43:19.436981  390588 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:43:19.436992  390588 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:43:19.437001  390588 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:43:19.437008  390588 command_runner.go:130] > #
	I1213 10:43:19.437022  390588 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:43:19.437029  390588 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:43:19.437035  390588 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:43:19.437044  390588 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:43:19.437047  390588 command_runner.go:130] > #
	I1213 10:43:19.437051  390588 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:43:19.437059  390588 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:43:19.437068  390588 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:43:19.437072  390588 command_runner.go:130] > # pids_limit = -1
	I1213 10:43:19.437078  390588 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 10:43:19.437087  390588 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:43:19.437094  390588 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:43:19.437104  390588 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:43:19.437110  390588 command_runner.go:130] > # log_size_max = -1
	I1213 10:43:19.437117  390588 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:43:19.437124  390588 command_runner.go:130] > # log_to_journald = false
	I1213 10:43:19.437130  390588 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:43:19.437136  390588 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:43:19.437143  390588 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:43:19.437149  390588 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:43:19.437160  390588 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:43:19.437164  390588 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:43:19.437170  390588 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:43:19.437174  390588 command_runner.go:130] > # read_only = false
	I1213 10:43:19.437180  390588 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:43:19.437188  390588 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:43:19.437195  390588 command_runner.go:130] > # live configuration reload.
	I1213 10:43:19.437199  390588 command_runner.go:130] > # log_level = "info"
	I1213 10:43:19.437216  390588 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:43:19.437221  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.437232  390588 command_runner.go:130] > # log_filter = ""
	I1213 10:43:19.437241  390588 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437248  390588 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:43:19.437252  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437260  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437264  390588 command_runner.go:130] > # uid_mappings = ""
	I1213 10:43:19.437270  390588 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437280  390588 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:43:19.437285  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437295  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437301  390588 command_runner.go:130] > # gid_mappings = ""
	I1213 10:43:19.437308  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:43:19.437314  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437320  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437331  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437335  390588 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:43:19.437345  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:43:19.437354  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437361  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437371  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437375  390588 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:43:19.437382  390588 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:43:19.437390  390588 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:43:19.437396  390588 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:43:19.437403  390588 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:43:19.437409  390588 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:43:19.437416  390588 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:43:19.437423  390588 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:43:19.437428  390588 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:43:19.437432  390588 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:43:19.437441  390588 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:43:19.437449  390588 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:43:19.437457  390588 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:43:19.437473  390588 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:43:19.437482  390588 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:43:19.437491  390588 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:43:19.437497  390588 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:43:19.437502  390588 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:43:19.437506  390588 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:43:19.437511  390588 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:43:19.437519  390588 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:43:19.437524  390588 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:43:19.437534  390588 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:43:19.437546  390588 command_runner.go:130] > # pinns_path = ""
	I1213 10:43:19.437553  390588 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:43:19.437560  390588 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:43:19.437567  390588 command_runner.go:130] > # enable_criu_support = true
	I1213 10:43:19.437573  390588 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:43:19.437579  390588 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:43:19.437586  390588 command_runner.go:130] > # enable_pod_events = false
	I1213 10:43:19.437593  390588 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:43:19.437598  390588 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:43:19.437604  390588 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:43:19.437609  390588 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:43:19.437619  390588 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 10:43:19.437636  390588 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:43:19.437642  390588 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:43:19.437653  390588 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:43:19.437664  390588 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:43:19.437668  390588 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:43:19.437672  390588 command_runner.go:130] > # ]
	I1213 10:43:19.437678  390588 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:43:19.437685  390588 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:43:19.437693  390588 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:43:19.437708  390588 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:43:19.437715  390588 command_runner.go:130] > #
	I1213 10:43:19.437724  390588 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:43:19.437729  390588 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:43:19.437737  390588 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:43:19.437742  390588 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:43:19.437752  390588 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:43:19.437760  390588 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:43:19.437764  390588 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:43:19.437769  390588 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:43:19.437775  390588 command_runner.go:130] > # monitor_env = []
	I1213 10:43:19.437780  390588 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:43:19.437787  390588 command_runner.go:130] > # allowed_annotations = []
	I1213 10:43:19.437793  390588 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:43:19.437799  390588 command_runner.go:130] > # no_sync_log = false
	I1213 10:43:19.437803  390588 command_runner.go:130] > # default_annotations = {}
	I1213 10:43:19.437807  390588 command_runner.go:130] > # stream_websockets = false
	I1213 10:43:19.437810  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.437838  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.437847  390588 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:43:19.437854  390588 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:43:19.437860  390588 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:43:19.437868  390588 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:43:19.437874  390588 command_runner.go:130] > #   in $PATH.
	I1213 10:43:19.437880  390588 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:43:19.437888  390588 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:43:19.437895  390588 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:43:19.437898  390588 command_runner.go:130] > #   state.
	I1213 10:43:19.437905  390588 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:43:19.437913  390588 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 10:43:19.437920  390588 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:43:19.437926  390588 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:43:19.437932  390588 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:43:19.437938  390588 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:43:19.437949  390588 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:43:19.437959  390588 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:43:19.437971  390588 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:43:19.437976  390588 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:43:19.437983  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:43:19.437993  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:43:19.438000  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:43:19.438006  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:43:19.438017  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:43:19.438026  390588 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:43:19.438042  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:43:19.438048  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:43:19.438055  390588 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:43:19.438064  390588 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:43:19.438071  390588 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:43:19.438079  390588 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:43:19.438091  390588 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:43:19.438097  390588 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:43:19.438104  390588 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:43:19.438114  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:43:19.438123  390588 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:43:19.438128  390588 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:43:19.438135  390588 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:43:19.438145  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:43:19.438153  390588 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:43:19.438160  390588 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:43:19.438168  390588 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:43:19.438173  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:43:19.438182  390588 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 10:43:19.438186  390588 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:43:19.438194  390588 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:43:19.438204  390588 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:43:19.438215  390588 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:43:19.438228  390588 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:43:19.438236  390588 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:43:19.438246  390588 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:43:19.438254  390588 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:43:19.438263  390588 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:43:19.438271  390588 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:43:19.438280  390588 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:43:19.438293  390588 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:43:19.438300  390588 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:43:19.438308  390588 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:43:19.438322  390588 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:43:19.438327  390588 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:43:19.438335  390588 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:43:19.438343  390588 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:43:19.438346  390588 command_runner.go:130] > #
	I1213 10:43:19.438350  390588 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:43:19.438353  390588 command_runner.go:130] > #
	I1213 10:43:19.438359  390588 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:43:19.438370  390588 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:43:19.438376  390588 command_runner.go:130] > #
	I1213 10:43:19.438383  390588 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:43:19.438392  390588 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:43:19.438395  390588 command_runner.go:130] > #
	I1213 10:43:19.438401  390588 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:43:19.438406  390588 command_runner.go:130] > # feature.
	I1213 10:43:19.438410  390588 command_runner.go:130] > #
	I1213 10:43:19.438416  390588 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:43:19.438422  390588 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:43:19.438431  390588 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:43:19.438437  390588 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:43:19.438447  390588 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:43:19.438450  390588 command_runner.go:130] > #
	I1213 10:43:19.438456  390588 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:43:19.438465  390588 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:43:19.438471  390588 command_runner.go:130] > #
	I1213 10:43:19.438478  390588 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1213 10:43:19.438486  390588 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:43:19.438491  390588 command_runner.go:130] > #
	I1213 10:43:19.438497  390588 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:43:19.438512  390588 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:43:19.438516  390588 command_runner.go:130] > # limitation.
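As the comments above describe, the notifier is driven by a single annotation on the Pod sandbox together with restartPolicy: Never; a minimal pod sketch illustrating that combination is shown below. The pod/container names and image are illustrative only, the chosen runtime handler must list the annotation in its allowed_annotations, and some seccomp profile has to be in effect for CRI-O to modify.

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo            # hypothetical name
  annotations:
    # terminate the workload ~5s after a blocked syscall is detected
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never                   # required, otherwise the kubelet restarts the container
  securityContext:
    seccompProfile:
      type: RuntimeDefault               # a seccomp profile must apply so CRI-O has one to modify
  containers:
  - name: app                            # hypothetical container name
    image: registry.k8s.io/pause:3.10.1  # any image; pause image reused here only for illustration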
	I1213 10:43:19.438523  390588 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:43:19.438528  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:43:19.438533  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438539  390588 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:43:19.438543  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438549  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438553  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438560  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438564  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438577  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438581  390588 command_runner.go:130] > allowed_annotations = [
	I1213 10:43:19.438586  390588 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:43:19.438589  390588 command_runner.go:130] > ]
	I1213 10:43:19.438594  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438599  390588 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:43:19.438604  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:43:19.438610  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438614  390588 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:43:19.438617  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438621  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438625  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438633  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438639  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438644  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438649  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438664  390588 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:43:19.438673  390588 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:43:19.438684  390588 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:43:19.438692  390588 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1213 10:43:19.438702  390588 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:43:19.438712  390588 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:43:19.438728  390588 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:43:19.438734  390588 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:43:19.438743  390588 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:43:19.438755  390588 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:43:19.438761  390588 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:43:19.438772  390588 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:43:19.438778  390588 command_runner.go:130] > # Example:
	I1213 10:43:19.438782  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:43:19.438787  390588 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:43:19.438793  390588 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:43:19.438801  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:43:19.438806  390588 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:43:19.438810  390588 command_runner.go:130] > # cpushares = "5"
	I1213 10:43:19.438814  390588 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:43:19.438820  390588 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:43:19.438825  390588 command_runner.go:130] > # cpulimit = "35"
	I1213 10:43:19.438837  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.438841  390588 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:43:19.438852  390588 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:43:19.438861  390588 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:43:19.438866  390588 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:43:19.438875  390588 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:43:19.438880  390588 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
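Taking the commented example above (activation_annotation "io.crio/workload", annotation_prefix "io.crio.workload-type") and the $annotation_prefix.$resource/$ctrName form described earlier, a pod opting into that workload and overriding one resource for a single container could look roughly like this sketch; the pod name, container name and cpushares value are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: workload-demo                            # hypothetical name
  annotations:
    io.crio/workload: ""                         # activation annotation; key only, value is ignored
    io.crio.workload-type.cpushares/app: "512"   # $annotation_prefix.$resource/$ctrName override
spec:
  containers:
  - name: app                                    # must match the container name in the annotation
    image: registry.k8s.io/pause:3.10.1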
	I1213 10:43:19.438885  390588 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:43:19.438894  390588 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:43:19.438905  390588 command_runner.go:130] > # Default value is set to true
	I1213 10:43:19.438910  390588 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:43:19.438915  390588 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:43:19.438925  390588 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:43:19.438932  390588 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:43:19.438938  390588 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:43:19.438943  390588 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 10:43:19.438951  390588 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:43:19.438954  390588 command_runner.go:130] > # timezone = ""
	I1213 10:43:19.438961  390588 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:43:19.438967  390588 command_runner.go:130] > #
	I1213 10:43:19.438973  390588 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:43:19.438979  390588 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:43:19.438983  390588 command_runner.go:130] > [crio.image]
	I1213 10:43:19.438993  390588 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:43:19.438999  390588 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:43:19.439005  390588 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:43:19.439015  390588 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439019  390588 command_runner.go:130] > # global_auth_file = ""
	I1213 10:43:19.439024  390588 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:43:19.439029  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439034  390588 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.439040  390588 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:43:19.439048  390588 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439055  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439060  390588 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:43:19.439066  390588 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:43:19.439072  390588 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 10:43:19.439081  390588 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 10:43:19.439087  390588 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:43:19.439094  390588 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:43:19.439100  390588 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:43:19.439106  390588 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:43:19.439111  390588 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:43:19.439117  390588 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:43:19.439123  390588 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:43:19.439134  390588 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:43:19.439142  390588 command_runner.go:130] > # pinned_images = [
	I1213 10:43:19.439145  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439151  390588 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:43:19.439157  390588 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:43:19.439166  390588 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:43:19.439172  390588 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:43:19.439180  390588 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:43:19.439184  390588 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:43:19.439190  390588 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:43:19.439197  390588 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:43:19.439203  390588 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:43:19.439209  390588 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 10:43:19.439223  390588 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:43:19.439228  390588 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:43:19.439234  390588 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:43:19.439243  390588 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:43:19.439247  390588 command_runner.go:130] > # changing them here.
	I1213 10:43:19.439253  390588 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:43:19.439260  390588 command_runner.go:130] > # insecure_registries = [
	I1213 10:43:19.439263  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439268  390588 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:43:19.439273  390588 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:43:19.439723  390588 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:43:19.439741  390588 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:43:19.439879  390588 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:43:19.439918  390588 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:43:19.439927  390588 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:43:19.439931  390588 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:43:19.439937  390588 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:43:19.439946  390588 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 10:43:19.439958  390588 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:43:19.439963  390588 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:43:19.439974  390588 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:43:19.439985  390588 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:43:19.439993  390588 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 10:43:19.440002  390588 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:43:19.440006  390588 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:43:19.440012  390588 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:43:19.440018  390588 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:43:19.440023  390588 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 10:43:19.440029  390588 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:43:19.440034  390588 command_runner.go:130] > # CNI plugins.
	I1213 10:43:19.440037  390588 command_runner.go:130] > [crio.network]
	I1213 10:43:19.440044  390588 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:43:19.440053  390588 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 10:43:19.440058  390588 command_runner.go:130] > # cni_default_network = ""
	I1213 10:43:19.440064  390588 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:43:19.440073  390588 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:43:19.440080  390588 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:43:19.440084  390588 command_runner.go:130] > # plugin_dirs = [
	I1213 10:43:19.440211  390588 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:43:19.440357  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440384  390588 command_runner.go:130] > # List of included pod metrics.
	I1213 10:43:19.440392  390588 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:43:19.440401  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440408  390588 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:43:19.440418  390588 command_runner.go:130] > [crio.metrics]
	I1213 10:43:19.440423  390588 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:43:19.440436  390588 command_runner.go:130] > # enable_metrics = false
	I1213 10:43:19.440441  390588 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:43:19.440446  390588 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:43:19.440452  390588 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:43:19.440460  390588 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:43:19.440472  390588 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:43:19.440477  390588 command_runner.go:130] > # metrics_collectors = [
	I1213 10:43:19.440481  390588 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:43:19.440496  390588 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:43:19.440501  390588 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:43:19.440506  390588 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:43:19.440509  390588 command_runner.go:130] > # 	"operations_total",
	I1213 10:43:19.440637  390588 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:43:19.440664  390588 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:43:19.440670  390588 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:43:19.440688  390588 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:43:19.440696  390588 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:43:19.440701  390588 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:43:19.440705  390588 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:43:19.440716  390588 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:43:19.440720  390588 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:43:19.440726  390588 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:43:19.440734  390588 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:43:19.440739  390588 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:43:19.440742  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440749  390588 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:43:19.440758  390588 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:43:19.440764  390588 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:43:19.440768  390588 command_runner.go:130] > # metrics_port = 9090
	I1213 10:43:19.440773  390588 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:43:19.440901  390588 command_runner.go:130] > # metrics_socket = ""
	I1213 10:43:19.440915  390588 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:43:19.440937  390588 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:43:19.440950  390588 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:43:19.440955  390588 command_runner.go:130] > # certificate on any modification event.
	I1213 10:43:19.440959  390588 command_runner.go:130] > # metrics_cert = ""
	I1213 10:43:19.440964  390588 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:43:19.440969  390588 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:43:19.440972  390588 command_runner.go:130] > # metrics_key = ""
	I1213 10:43:19.440978  390588 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:43:19.440982  390588 command_runner.go:130] > [crio.tracing]
	I1213 10:43:19.440995  390588 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:43:19.441000  390588 command_runner.go:130] > # enable_tracing = false
	I1213 10:43:19.441006  390588 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 10:43:19.441015  390588 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:43:19.441022  390588 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:43:19.441031  390588 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:43:19.441039  390588 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:43:19.441042  390588 command_runner.go:130] > [crio.nri]
	I1213 10:43:19.441047  390588 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:43:19.441253  390588 command_runner.go:130] > # enable_nri = true
	I1213 10:43:19.441268  390588 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:43:19.441274  390588 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:43:19.441278  390588 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:43:19.441283  390588 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:43:19.441288  390588 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:43:19.441293  390588 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:43:19.441298  390588 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:43:19.441355  390588 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:43:19.441365  390588 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:43:19.441370  390588 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:43:19.441374  390588 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:43:19.441379  390588 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:43:19.441384  390588 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:43:19.441391  390588 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:43:19.441401  390588 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:43:19.441405  390588 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:43:19.441417  390588 command_runner.go:130] > # - OCI hook injection
	I1213 10:43:19.441427  390588 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:43:19.441435  390588 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:43:19.441440  390588 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:43:19.441444  390588 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:43:19.441453  390588 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:43:19.441460  390588 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:43:19.441466  390588 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:43:19.441469  390588 command_runner.go:130] > #
	I1213 10:43:19.441473  390588 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:43:19.441480  390588 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:43:19.441485  390588 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:43:19.441629  390588 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:43:19.441658  390588 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:43:19.441671  390588 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:43:19.441677  390588 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:43:19.441685  390588 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:43:19.441688  390588 command_runner.go:130] > # ]
	I1213 10:43:19.441694  390588 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 10:43:19.441700  390588 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:43:19.441709  390588 command_runner.go:130] > [crio.stats]
	I1213 10:43:19.441720  390588 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:43:19.441730  390588 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:43:19.441734  390588 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:43:19.441743  390588 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:43:19.441752  390588 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:43:19.441756  390588 command_runner.go:130] > # collection_period = 0
	I1213 10:43:19.443275  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.403988128Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:43:19.443305  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404025092Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:43:19.443315  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404051931Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:43:19.443326  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404076596Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:43:19.443340  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404148548Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:19.443352  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404414955Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:43:19.443364  390588 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 10:43:19.443836  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:19.443854  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:19.443875  390588 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:43:19.443898  390588 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:43:19.444025  390588 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:43:19.444095  390588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:43:19.450891  390588 command_runner.go:130] > kubeadm
	I1213 10:43:19.450967  390588 command_runner.go:130] > kubectl
	I1213 10:43:19.450987  390588 command_runner.go:130] > kubelet
	I1213 10:43:19.451803  390588 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:43:19.451864  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:43:19.459352  390588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:43:19.471938  390588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:43:19.485136  390588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:43:19.498010  390588 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:43:19.501925  390588 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:43:19.502045  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.620049  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:20.022042  390588 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:43:20.022188  390588 certs.go:195] generating shared ca certs ...
	I1213 10:43:20.022221  390588 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.022446  390588 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:43:20.022567  390588 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:43:20.022606  390588 certs.go:257] generating profile certs ...
	I1213 10:43:20.022771  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:43:20.022893  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:43:20.023000  390588 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:43:20.023048  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:43:20.023081  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:43:20.023123  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:43:20.023158  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:43:20.023202  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:43:20.023238  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:43:20.023279  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:43:20.023318  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:43:20.023431  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:43:20.023496  390588 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:43:20.023540  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:43:20.023607  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:43:20.023670  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:43:20.023728  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:43:20.023828  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:20.023897  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.023941  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem -> /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.023985  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.024591  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:43:20.049939  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:43:20.071962  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:43:20.093520  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:43:20.117621  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:43:20.135349  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:43:20.152883  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:43:20.170121  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:43:20.188254  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:43:20.205892  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:43:20.223561  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:43:20.241467  390588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:43:20.254691  390588 ssh_runner.go:195] Run: openssl version
	I1213 10:43:20.260777  390588 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:43:20.261193  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.268769  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:43:20.276440  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280293  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280332  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280379  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.320848  390588 command_runner.go:130] > 3ec20f2e
	I1213 10:43:20.321296  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:43:20.328708  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.335901  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:43:20.343392  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347019  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347264  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347323  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.388019  390588 command_runner.go:130] > b5213941
	I1213 10:43:20.388604  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:43:20.396066  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.403389  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:43:20.410914  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414772  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414823  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414888  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.455731  390588 command_runner.go:130] > 51391683
	I1213 10:43:20.456248  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:43:20.463583  390588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467136  390588 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467160  390588 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:43:20.467167  390588 command_runner.go:130] > Device: 259,1	Inode: 1322536     Links: 1
	I1213 10:43:20.467174  390588 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:20.467180  390588 command_runner.go:130] > Access: 2025-12-13 10:39:12.482590700 +0000
	I1213 10:43:20.467186  390588 command_runner.go:130] > Modify: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467191  390588 command_runner.go:130] > Change: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467197  390588 command_runner.go:130] >  Birth: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467264  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:43:20.507794  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.508276  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:43:20.549373  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.549450  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:43:20.591501  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.592041  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:43:20.633163  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.633239  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:43:20.673681  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.674235  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:43:20.714863  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.715372  390588 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:20.715472  390588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:43:20.715572  390588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:43:20.742591  390588 cri.go:89] found id: ""
	I1213 10:43:20.742663  390588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:43:20.749676  390588 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:43:20.749696  390588 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:43:20.749703  390588 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:43:20.750605  390588 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:43:20.750650  390588 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:43:20.750723  390588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:43:20.758246  390588 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:43:20.758662  390588 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-407525" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.758765  390588 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "functional-407525" cluster setting kubeconfig missing "functional-407525" context setting]
	I1213 10:43:20.759076  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.759474  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.759724  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
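For context on the kubeconfig repair above: the entry minikube writes for this profile would look roughly like the sketch below, reusing the endpoint and certificate paths from the client config just logged. The cluster/context/user names are the usual profile-derived defaults and are shown here only for illustration.

apiVersion: v1
kind: Config
clusters:
- name: functional-407525
  cluster:
    server: https://192.168.49.2:8441
    certificate-authority: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt
contexts:
- name: functional-407525
  context:
    cluster: functional-407525
    user: functional-407525
current-context: functional-407525
users:
- name: functional-407525
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt
    client-key: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key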
	I1213 10:43:20.760259  390588 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:43:20.760282  390588 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:43:20.760289  390588 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:43:20.760294  390588 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:43:20.760299  390588 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:43:20.760595  390588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:43:20.760675  390588 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:43:20.768313  390588 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:43:20.768394  390588 kubeadm.go:602] duration metric: took 17.723293ms to restartPrimaryControlPlane
	I1213 10:43:20.768419  390588 kubeadm.go:403] duration metric: took 53.05457ms to StartCluster
	I1213 10:43:20.768469  390588 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.768581  390588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.769195  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.769470  390588 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:43:20.769730  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:20.769792  390588 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:43:20.769868  390588 addons.go:70] Setting storage-provisioner=true in profile "functional-407525"
	I1213 10:43:20.769887  390588 addons.go:239] Setting addon storage-provisioner=true in "functional-407525"
	I1213 10:43:20.769967  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.770424  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.770582  390588 addons.go:70] Setting default-storageclass=true in profile "functional-407525"
	I1213 10:43:20.770602  390588 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-407525"
	I1213 10:43:20.770845  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.776047  390588 out.go:179] * Verifying Kubernetes components...
	I1213 10:43:20.778873  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:20.803376  390588 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:43:20.806823  390588 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:20.806848  390588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:43:20.806911  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.815503  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.815748  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.816048  390588 addons.go:239] Setting addon default-storageclass=true in "functional-407525"
	I1213 10:43:20.816085  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.816499  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.849236  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.860497  390588 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:20.860524  390588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:43:20.860587  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.893135  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.991835  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:21.017033  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:21.050080  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:21.773497  390588 node_ready.go:35] waiting up to 6m0s for node "functional-407525" to be "Ready" ...
	I1213 10:43:21.773656  390588 type.go:168] "Request Body" body=""
	I1213 10:43:21.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:21.774009  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774035  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774063  390588 retry.go:31] will retry after 178.71376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774107  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774121  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774127  390588 retry.go:31] will retry after 267.498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:21.953713  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.014320  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.018022  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.018057  390588 retry.go:31] will retry after 328.520116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.042240  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.097866  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.101425  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.101460  390588 retry.go:31] will retry after 340.23882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
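The repeated apply failures above all have the same shape: kubectl cannot download the OpenAPI schema for validation because the apiserver on localhost:8441 is still refusing connections, so the apply is retried after a growing delay. A minimal sketch of that retry loop, using a hypothetical retryApply helper (illustrative only, not minikube's actual retry.go):

// Sketch of the "apply failed, will retry after ..." pattern seen in the log:
// run an apply step, and on failure wait a growing interval before retrying.
package sketch

import (
	"fmt"
	"log"
	"time"
)

// retryApply runs apply until it succeeds or attempts are exhausted, waiting
// a growing interval between tries, roughly like the intervals logged above.
func retryApply(apply func() error, attempts int, initial time.Duration) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		log.Printf("apply failed, will retry after %v: %v", delay, err)
		time.Sleep(delay)
		delay *= 2 // the logged intervals grow (with jitter) in a similar way
	}
	return fmt.Errorf("apply did not succeed after %d attempts: %w", attempts, err)
}

In this run the closure would wrap the "sudo KUBECONFIG=... kubectl apply --force -f ..." command shown above, so each connection-refused failure is absorbed until the apiserver is reachable again.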
	I1213 10:43:22.273721  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.274173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.347588  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.405090  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.408724  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.408759  390588 retry.go:31] will retry after 330.053163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.441890  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.497250  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.500831  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.500864  390588 retry.go:31] will retry after 301.657591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.739051  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.774467  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.774545  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.774882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.796776  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.800408  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.800485  390588 retry.go:31] will retry after 1.110001612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.803607  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.863746  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.863797  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.863816  390588 retry.go:31] will retry after 925.323482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.274339  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.274464  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.274793  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:23.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.774742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.775115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:23.775193  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
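The GET requests against /api/v1/nodes/functional-407525 interleaved through this section are the readiness poll: each iteration fetches the node and checks its Ready condition, and while the apiserver is down the request fails with connection refused and the poll simply continues. A sketch of that check using client-go, with a hypothetical nodeReady helper and an already-constructed clientset (assumptions for illustration; not minikube's node_ready.go verbatim):

// Sketch: fetch the node and report whether its Ready condition is True.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node has condition Ready=True.
// While the apiserver is still coming back up, the Get fails with
// "connection refused" and the caller keeps polling, as the log shows.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

A caller would invoke this roughly every 500ms, as the timestamps above suggest, until it returns true or the 6m0s wait budget from start.go expires.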
	I1213 10:43:23.789322  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:23.850165  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.853613  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.853701  390588 retry.go:31] will retry after 1.468677433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.910870  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:23.967004  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.970690  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.970723  390588 retry.go:31] will retry after 1.30336677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:24.274187  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.274270  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.274613  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:24.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.774104  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.273868  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.273973  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.274299  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:25.274422  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.322752  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:25.335088  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.335126  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.335146  390588 retry.go:31] will retry after 1.31175111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389173  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.389228  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389247  390588 retry.go:31] will retry after 1.937290048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.773818  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.773896  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.774238  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:26.274714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.274790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:26.275175  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:26.647823  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:26.708762  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:26.708815  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.708835  390588 retry.go:31] will retry after 2.338895321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.773966  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.774052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.774373  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.273820  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.327657  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:27.389087  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:27.389124  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.389154  390588 retry.go:31] will retry after 3.77996712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.774347  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.774610  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.274639  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.275025  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:28.774230  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:29.048671  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:29.108913  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:29.108956  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.108976  390588 retry.go:31] will retry after 6.196055786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.274133  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.274210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.274535  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:29.774410  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.774493  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.774856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.274678  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.274752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.275098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.774546  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.774615  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.774881  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:30.774922  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:31.169380  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:31.223779  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:31.227282  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.227315  390588 retry.go:31] will retry after 4.701439473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.274644  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.274723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.275035  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:31.773748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.274119  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.774160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:33.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.273823  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.274181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:33.274234  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:33.773733  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.273904  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.274296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.773742  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.774139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.273828  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.273922  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.305578  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:35.371590  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.371636  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.371657  390588 retry.go:31] will retry after 5.458500829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.773846  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:35.774236  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:35.929536  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:35.989448  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.989487  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.989506  390588 retry.go:31] will retry after 5.007301518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:36.274095  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.274168  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.274462  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:36.774043  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.774126  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.774417  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.273882  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.773915  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:37.774386  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:38.274036  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.274110  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.274365  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:38.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.273872  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.273948  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.774053  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.273899  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:40.274309  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:40.774007  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.774083  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.774431  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.830857  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:40.888820  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:40.888869  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.888889  390588 retry.go:31] will retry after 11.437774943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.997102  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:41.058447  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:41.058511  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.058532  390588 retry.go:31] will retry after 7.34875984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.275648  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.275736  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.275995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:41.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:42.273927  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.274020  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.274372  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:42.274432  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:42.773693  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.774092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.773920  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.774021  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:44.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.274666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.274925  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:44.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:44.773692  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.273902  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.274305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.773737  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.273797  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.273879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.274217  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.774024  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.774120  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.774453  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:46.774515  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:47.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.274050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:47.773764  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.773857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.273933  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.274397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.407754  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:48.470395  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:48.474021  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.474053  390588 retry.go:31] will retry after 19.108505533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.774398  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.774473  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.774751  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:48.774803  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:49.274554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.274627  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.274988  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:49.773726  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.273886  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.273967  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.774213  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.774666  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:51.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.274611  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.274924  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:51.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:51.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.774715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.774977  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.274174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.327551  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:52.388989  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:52.389038  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.389058  390588 retry.go:31] will retry after 15.332526016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.774747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.775066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.273766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.274095  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.773894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:53.774258  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:54.273942  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.274024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:54.774619  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.774730  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.774809  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.775152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:55.775209  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:56.273860  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.273937  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:56.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.274186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.774399  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.774745  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:58.274628  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.274703  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.275023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:58.275075  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:58.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.274411  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.274483  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.274749  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.774628  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.774978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.774714  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.775059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:00.775121  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:01.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.274061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:01.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.773778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.774062  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.273872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.774185  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:03.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.273804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.274108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:03.274159  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:03.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.774368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.273910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.773901  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.773977  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:05.274252  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:05.773910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.774005  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.774314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.274302  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.274372  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.274644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.774485  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.774567  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.774982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.583825  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:07.646535  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.646580  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.646600  390588 retry.go:31] will retry after 14.697551715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.722798  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:07.774314  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.774682  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:07.774739  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:07.791129  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.791173  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.791194  390588 retry.go:31] will retry after 13.531528334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:08.273899  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.274336  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:08.774067  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.774147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.774508  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.274290  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.274369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.274678  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.774447  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.774528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:09.774936  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:10.274570  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.274961  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:10.774562  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.774915  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.273789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.274110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:12.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.273786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:12.274098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:12.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.774136  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.774066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:14.273794  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.274227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:14.274283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:14.773929  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.774010  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.774363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.273724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.273985  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:16.274139  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.274221  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.274567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:16.274622  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:16.774305  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.774378  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.774644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.274446  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.274866  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.774497  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.774899  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:18.274657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.274734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.275051  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:18.275096  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:18.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.774209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.774026  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.774099  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.774355  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.273801  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.273913  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.773981  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.774053  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.774366  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:20.774423  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:21.274357  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.274428  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.274706  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:21.323061  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:21.389635  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:21.389682  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.389701  390588 retry.go:31] will retry after 37.789083594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.773876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.273915  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.273997  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.344570  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:22.405449  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:22.405493  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.405512  390588 retry.go:31] will retry after 23.725920264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.773711  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.773782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.774033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:23.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.274206  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:23.274261  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:23.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.773766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.774054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.274518  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.274774  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.774608  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.774678  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:25.274658  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.274733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.275077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:25.275131  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:25.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.774508  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.774773  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.274739  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.274817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.275144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.274455  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.274547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.274811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.774572  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:27.775003  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:28.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.274777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.275087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:28.773642  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.773716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.273745  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.274155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.773917  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.774248  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:30.274557  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.274641  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.274916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:30.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:30.774540  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.774632  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.774962  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.274077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.774321  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.774707  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:32.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.274604  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.274936  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:32.274993  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:32.774698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.774804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.274529  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.274787  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.774581  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.774664  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.775008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:34.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.274794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.275152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:34.275214  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:34.773858  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.773932  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.273930  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.274307  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.773735  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:36.774233  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:37.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.274140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:37.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.774471  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.774822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.274598  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.274669  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.274999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.774142  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:39.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.274562  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.274851  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:39.274908  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:39.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.774730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.775049  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.273847  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.774227  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.774300  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.774572  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:41.274605  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.274676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.275014  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:41.275084  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:41.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.273842  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.273921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.274231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.773931  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.774027  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.774383  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.273973  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.274062  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.274409  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.773648  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.773733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.773987  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:43.774033  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:44.273702  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.273808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:44.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.773958  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.273983  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.274063  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.274356  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:45.774231  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:46.131654  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:46.194295  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194358  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194451  390588 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
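	Every "connection refused" above, both from the node poll against 192.168.49.2:8441 and from kubectl's openapi download against localhost:8441, points at the same root cause: the apiserver is not serving yet, so addon applies cannot validate. A hedged sketch of waiting for the apiserver's /readyz endpoint before applying anything is shown below; the address, poll interval, and timeout are assumptions for the example, and certificate verification is skipped only because the test cluster uses a self-signed cert.

	// Minimal sketch: poll the apiserver health endpoint until it answers 200
	// or the deadline passes. Not taken from minikube's source.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForAPIServer(baseURL string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Self-signed cert on the test cluster; skip verification
				// for this health probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(baseURL + "/readyz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver is serving requests
				}
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the poll in the log
		}
		return fmt.Errorf("apiserver at %s not ready within %s", baseURL, timeout)
	}

	func main() {
		if err := waitForAPIServer("https://192.168.49.2:8441", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}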
	I1213 10:44:46.274603  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.274700  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.275072  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:46.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.774112  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.774387  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.274208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:48.273867  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.273936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:48.274241  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:48.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.774229  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.273767  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.774519  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.774595  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.774926  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:50.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.274774  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.275102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:50.275164  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:50.774065  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.774140  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.774471  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.274252  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.274326  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.274605  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.774340  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.774416  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.774757  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.274427  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.274511  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.774919  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:52.774958  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:53.274692  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.274773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.275105  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:53.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.273740  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:55.273871  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.273946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.274266  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:55.274336  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:55.773682  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.773752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.773998  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.273698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.273924  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.773928  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:57.774354  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:58.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.273873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.274218  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:58.774470  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.774560  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.774811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.179566  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:59.239921  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.239971  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.240057  390588 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:59.247585  390588 out.go:179] * Enabled addons: 
	I1213 10:44:59.249608  390588 addons.go:530] duration metric: took 1m38.479812026s for enable addons: enabled=[]
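	After the addon phase gives up with enabled=[], the log keeps polling the node's Ready condition (the node_ready.go warnings). An equivalent standalone check, wrapped in Go, is sketched below; the node name comes from the log, while the timeout is chosen for the example. With the apiserver still refusing connections it fails the same way the polling loop does.

	// Minimal sketch: shell out to kubectl wait for the node Ready condition.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
			"node/functional-407525", "--timeout=120s")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("node not Ready:", err)
		}
	}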
	I1213 10:44:59.274157  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.274255  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.274564  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.774339  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.774421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:59.774833  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:00.278749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.278833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.279163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:00.774212  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.774297  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.774688  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.274605  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.274894  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.774686  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.774765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.775087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:01.775143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:02.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.274240  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:02.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.773792  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:04.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.274036  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.274352  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:04.274418  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:04.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.773957  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.774210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.273726  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.274127  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.773770  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:06.774260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:07.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.273836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.274400  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:07.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.774207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.273920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.274303  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.773655  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.773725  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.773989  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:09.273678  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:09.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:09.773807  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.774222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.274017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.274269  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.774349  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.774733  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:11.274712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.274783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.275094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:11.275143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:11.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.774126  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.273826  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.273930  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.773940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.273711  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.274065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.774187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:13.774240  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:14.273793  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.273953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:14.773991  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.774073  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.774396  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.274164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:15.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:16.274172  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.274247  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.280111  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 10:45:16.773739  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.273862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.274194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.773798  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.774048  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:18.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:18.274286  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:18.773986  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.774078  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.774398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.774130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:20.274061  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.274147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.274521  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:20.274567  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:20.774429  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.774513  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.774784  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.274788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.275140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.773809  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.273923  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.274330  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.773836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:22.774266  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:23.273752  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.273825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:23.773854  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.773925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:25.273932  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.274007  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:25.274311  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:25.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.773835  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.273929  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.274023  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.274342  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.774676  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.774744  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.774995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.274109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.773826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.774163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:27.774227  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:28.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.273788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.274057  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:28.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.773816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.774148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.273934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.274250  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.773725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.773794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.774055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:30.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:30.274260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:30.774238  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.774643  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.274624  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.774738  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.775064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.273830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.274149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.773762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:32.774151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:33.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.274135  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:33.773816  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.773892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.274572  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.274643  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.274903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.774729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:34.775152  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:35.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.273759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.274117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:35.774407  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.774479  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.774771  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.274663  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.274756  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.275065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.773912  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.774265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:37.273706  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.273778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.274054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:37.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:37.773740  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.773842  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.273961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.773975  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.774042  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.774302  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:39.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:39.274262  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:39.773743  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.273728  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.274144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.774643  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.774717  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.775033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.273691  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.273765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.774789  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:41.774848  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:42.274590  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.274665  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.275006  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:42.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.774116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.274417  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.274505  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.274764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.774491  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.774561  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:43.774985  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:44.274631  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.274716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:44.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.774086  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.273789  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.273877  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.773938  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.774016  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:46.274211  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.274311  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.274593  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:46.274641  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:46.774347  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.774423  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.774786  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.274695  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.773821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.273791  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.274221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.773944  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:48.774398  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:49.273717  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.274115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:49.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.273881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.774153  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.774227  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.774498  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:50.774547  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:51.274578  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.274980  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:51.773696  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.773772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.774097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.274044  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.774214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:53.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.274028  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.274362  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:53.274420  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:53.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.773918  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.273749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.773750  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:55.774229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:56.273954  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.274030  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.274368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:56.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.774681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.773886  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.773969  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.774297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:57.774351  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:58.274008  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.274074  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.274328  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:58.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.273755  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.273831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.274152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.773661  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.773978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:00.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.273870  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.274207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:00.274265  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:00.774194  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.774271  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.274425  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.274499  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.274770  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.774648  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.774734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.773686  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.774020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:02.774062  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:03.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.273890  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.274214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:03.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.274309  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.274379  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.274657  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.774430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.774509  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:04.774924  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:05.274540  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.274616  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.274963  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:05.773676  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.773758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.774085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.273969  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.274052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.274459  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:07.274619  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.274708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.274974  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:07.275017  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:07.773671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.273847  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.274261  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.773957  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.774035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.774397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.274256  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.773968  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.774044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.774403  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:09.774460  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:10.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:10.774136  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.774210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.274519  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.274594  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.274918  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.774397  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.774832  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:11.774891  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:12.274659  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.274757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:12.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.273921  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.273994  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.773843  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.774234  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:14.273963  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.274066  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.274415  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:14.274474  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:14.773715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.273806  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.274220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.773837  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.773921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:16.274096  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.274165  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.274517  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:16.274565  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:16.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.774356  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.774701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.274489  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.274563  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.274929  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.773641  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.773710  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.773957  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:18.274732  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.274812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.275153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:18.275207  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:18.773906  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.773982  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.774326  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.274430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.274794  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.774601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.774671  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.273724  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.774125  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.774196  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:20.774628  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:21.274424  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.274514  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.274834  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:21.774531  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.774612  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.774944  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.274640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.274709  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.275021  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.774663  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.774773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.775134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:22.775197  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:23.273890  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.273971  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.274309  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:23.773717  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.774083  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:25.274593  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.274667  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.274932  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:25.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:25.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.773769  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.774103  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.274187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.773723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.773803  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.774134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.773942  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.774024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.774376  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:27.774430  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:28.274709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.274789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:28.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.774272  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.273759  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.774348  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.774419  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:29.774820  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:30.274620  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.274696  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.275046  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.775077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.273951  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:32.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.273869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:32.274272  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.773932  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.774017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.774448  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.273707  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.273777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.274033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:34.774219  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:35.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.273839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:35.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.774091  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.273704  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.273807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.773734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:37.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:37.274109  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:37.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.774167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.273869  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.273941  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.774621  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.774711  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.774971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:39.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.273795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.274130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:39.274185  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:39.773882  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.773961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.273738  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.273832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.274158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.774749  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.774834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.775222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:41.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.274347  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:41.274405  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:41.774636  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.774701  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.773953  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.774405  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:43.274638  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.274978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:43.275016  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:43.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.773806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.274363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.774070  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.774138  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.774399  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.273823  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.273898  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.274268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.773995  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.774070  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.774394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:45.774448  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:46.274246  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.274313  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.274596  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:46.774345  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.774417  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.774765  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.274423  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.274522  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.274846  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.774170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.774241  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.774544  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:47.774600  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:48.274170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.274257  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.274614  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:48.774460  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.774547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.774903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.274601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.274681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.274964  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.773817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:50.273855  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.273935  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.274285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:50.274341  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:50.774135  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.774202  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.774454  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.274467  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.274552  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.274884  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.774669  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.774754  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.775052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.273723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.274094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.774189  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:52.774245  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:53.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.274313  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:53.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.274242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.773831  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:54.774330  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:55.273935  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.274280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:55.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.774166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.273793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.274128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.774284  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.774353  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.774609  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:56.774649  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:57.274349  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.274429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.274756  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:57.774568  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.774644  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.274491  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.274570  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.274873  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.774677  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.774750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.775093  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:58.775146  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:59.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.274092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:59.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.273965  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.774530  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.774877  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:01.273680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.274056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:01.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:01.773802  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.774231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.273805  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.773820  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.774149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:03.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.273876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:03.274268  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:03.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.274436  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.274533  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.274808  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.774676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.775027  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.273736  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.273815  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.773934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:05.774242  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:06.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.274139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:06.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.773936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.774268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.274469  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.274550  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.274856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.774641  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.775047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:07.775098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:08.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.273853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:08.773674  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.773747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.773993  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.273756  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:10.274330  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.274409  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.274689  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:10.274730  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:10.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.775070  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.773673  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.773751  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.774001  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:12.774276  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:13.273922  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.273993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.274301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:13.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.774158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.274297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.773969  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.774294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:14.774335  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:15.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:15.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.773859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.774205  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.273875  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.274219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 10:47:16.775086  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:17.273732  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:17.773664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.773749  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.774040  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.773831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.774146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:19.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.273784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:19.274151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:19.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.773873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.774244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.273959  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.274044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.274394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.774369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.774676  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.274781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:21.275128  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:21.773729  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.773910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.273864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.774583  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:23.774974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:24.274727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.274797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.275112  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:24.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.274148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.774201  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:26.273894  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.273970  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:26.274358  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:26.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.774082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.773769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.773862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.273908  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:28.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:29.273783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.274195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:29.773879  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.773954  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.274239  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.775063  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:30.775117  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:31.273664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.273730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.273976  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:31.773680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.774074  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.273770  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:33.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.273816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.274165  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:33.274237  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:33.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.274209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.774154  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.273810  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.773845  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:35.774222  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:36.273675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.274088  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:36.773810  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.774215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.274138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.773861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.774225  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:37.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:38.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.274035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:38.774693  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.774771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.775056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.773832  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.773906  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.774253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:39.774308  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:40.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.274596  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.274862  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:40.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.774759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.775099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.274171  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.773727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.773800  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:42.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.274281  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:42.274339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:42.773878  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.773968  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.774283  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.274019  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.274334  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.774150  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.274183  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.773864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.774198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:44.774253  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:45.273924  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.274419  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:45.773843  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.773923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.774295  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.274029  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:46.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:47.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.274043  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.274393  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:47.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.773795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.773914  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.773990  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.774305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:48.774364  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:49.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.273791  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:49.773785  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.274190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.774233  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.774309  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.774588  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:50.774631  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:51.274650  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.274724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.275059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:51.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.774236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.274538  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.274799  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.774588  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.774666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.775007  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:52.775061  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:53.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:53.773675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.773745  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.774008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.273801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.773943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:55.273989  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.274065  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.274332  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:55.274372  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:55.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.774114  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.774457  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.274294  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.274368  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.274696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.774209  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.774284  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.774573  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:57.274365  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.274443  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.274796  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:57.274856  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:57.774615  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.774691  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.775029  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.274293  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.274363  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.274642  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.774411  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.774519  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.774841  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:59.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.274571  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.274905  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:59.274961  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:59.774120  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.774186  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.774529  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.274587  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.274674  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.275002  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.773691  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.773785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.774128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.273694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.273766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.274084  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.773905  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.774301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:01.774362  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:02.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.273943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:02.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.773929  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.273855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.773848  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.774192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:04.274348  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.274421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.274701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:04.274747  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:04.774520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.774598  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.774955  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.274625  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.274699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.275061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.273741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.773880  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.773956  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:06.774339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:07.273666  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.274015  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:07.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.773867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.273802  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.774472  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.774731  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:08.774771  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:09.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.274602  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.274979  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:09.774731  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.774819  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.775148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.274501  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.274577  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.274825  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.774760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.775071  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:10.775127  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:11.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.273737  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:11.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.774619  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.774916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.274606  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.274685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.275008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.773772  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.773849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:13.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.274085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:13.274132  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:13.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.273776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.773757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.774016  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:15.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.274160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:15.274217  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:15.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.273918  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.773913  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.773993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:17.273914  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:17.274360  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:17.773705  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.773779  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.774047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.274175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:19.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.274589  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:19.274893  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:19.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.774722  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.775081  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.273688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.273761  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.773877  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.773951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.774252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.274225  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.274303  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.274658  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.774461  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.774542  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:21.774990  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:22.273646  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.273719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.273971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:22.773678  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.773773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.273879  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.774466  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.774555  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.774828  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:24.274703  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.274778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.275113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:24.275166  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:24.773777  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.273716  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.773749  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.273812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.274134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.774477  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.774735  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:26.774777  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:27.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.274638  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.274990  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:27.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.274454  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.274531  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.774713  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:28.775072  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:29.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:29.773685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.773767  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.774067  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.274172  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:31.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.273960  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.274245  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:31.274287  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:31.773960  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.774353  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.273874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.274212  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.774110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.774195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:33.774250  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:34.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.274551  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.274859  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:34.774530  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.774653  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.774994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.774121  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:36.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:36.274191  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:36.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.773953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.273782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.274052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.774133  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.274096  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.774434  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.774523  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.774857  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:38.774915  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:39.274697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.274775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:39.773799  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.773875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.274392  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.274461  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.274778  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.774675  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:40.775056  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:41.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.274099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:41.774223  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.774306  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.774579  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.274405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.274535  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.274934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.774574  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:43.273697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.274034  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:43.274076  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:43.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.773825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.273947  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:45.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.274348  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:45.274406  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:45.774078  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.774155  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.774567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.274333  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.274401  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.274668  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.774394  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.774466  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.774810  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:47.274617  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.275033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:47.275083  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:47.774292  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.774364  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.774696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.274590  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.274935  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.774610  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.775020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.273781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.274042  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.773747  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:49.774228  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:50.273926  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.274364  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:50.774202  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.774276  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.274422  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.274498  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.274822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.774623  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.774699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.775050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:51.775104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:52.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.273845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:52.773759  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.273848  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.273927  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.774090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:54.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:54.274238  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.273994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.773662  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.773743  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:56.274020  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.274092  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.274398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:56.274455  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:56.773718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.773898  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.773979  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.774308  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.274114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.774247  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:58.774302  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:59.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:59.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.273835  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.273945  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.274259  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.774386  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.774788  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:00.774843  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:01.274715  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.274784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:01.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.273897  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.274252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.773815  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.773883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:03.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.273923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.274294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:03.274348  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:03.773866  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.773946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.774285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.273977  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.274050  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.274314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.273962  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.773962  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.774279  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:05.774317  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:06.274277  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.274357  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.274684  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:06.774350  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.774429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.774754  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.274072  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.274145  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.274401  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.774168  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:08.273771  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:08.274229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:08.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.773911  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.773987  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.774329  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:10.274643  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.274715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.275018  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:10.275073  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:10.774631  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.774708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.273785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.274118  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.273785  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.773779  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:12.774264  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:13.274414  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.274491  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.274806  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:13.774595  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.274700  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.274776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.275122  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.773666  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.773732  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:15.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.273760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:15.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:15.773812  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.273920  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.774406  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:17.274090  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.274171  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.274528  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:17.274584  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:17.774247  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.774320  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.774585  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.274376  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.274452  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.274800  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.774498  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:19.274279  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.274351  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.274659  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:19.274729  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:19.774509  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.774592  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.774934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.273655  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.273729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.773657  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.773723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.773970  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:49:21.273834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:21.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.773895  390588 type.go:168] "Request Body" body=""
	W1213 10:49:21.773963  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1213 10:49:21.773982  390588 node_ready.go:38] duration metric: took 6m0.000438977s for node "functional-407525" to be "Ready" ...
	I1213 10:49:21.777070  390588 out.go:203] 
	W1213 10:49:21.779923  390588 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:49:21.779945  390588 out.go:285] * 
	W1213 10:49:21.782066  390588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:21.784854  390588 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123701412Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123709592Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123715558Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123721006Z" level=info msg="RDT not available in the host system"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.123734454Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.124544973Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.124572083Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.124588945Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125247847Z" level=info msg="Conmon does support the --sync option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125273513Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125409662Z" level=info msg="Updated default CNI network name to "
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.125957779Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.126329836Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.126386468Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176496877Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176536107Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176582655Z" level=info msg="Create NRI interface"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176685655Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.17669427Z" level=info msg="runtime interface created"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176705109Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.17671146Z" level=info msg="runtime interface starting up..."
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.17671745Z" level=info msg="starting plugins..."
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176730611Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:43:19 functional-407525 crio[5356]: time="2025-12-13T10:43:19.176801118Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:43:19 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:26.151590    8703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:26.152012    8703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:26.153490    8703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:26.153796    8703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:26.155221    8703 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	
	
	==> kernel <==
	 10:49:26 up  2:31,  0 user,  load average: 0.25, 0.27, 0.71
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:49:23 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:24 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1139.
	Dec 13 10:49:24 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:24 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:24 functional-407525 kubelet[8576]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:24 functional-407525 kubelet[8576]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:24 functional-407525 kubelet[8576]: E1213 10:49:24.599107    8576 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:24 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:24 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:25 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 13 10:49:25 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:25 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:25 functional-407525 kubelet[8611]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:25 functional-407525 kubelet[8611]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:25 functional-407525 kubelet[8611]: E1213 10:49:25.339865    8611 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:25 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:25 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:26 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 13 10:49:26 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:26 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:26 functional-407525 kubelet[8683]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:26 functional-407525 kubelet[8683]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:26 functional-407525 kubelet[8683]: E1213 10:49:26.081604    8683 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:26 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:26 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
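The wait loop that dominates the log above is minikube polling GET /api/v1/nodes/functional-407525 roughly every 500ms until a 6m0s budget expires; every attempt is refused because the apiserver never comes up, so node_ready finally gives up with "context deadline exceeded". The following is a minimal Go sketch of that generic wait-for-Ready pattern, illustrative only and not minikube's actual implementation; the endpoint and node name are taken from this run, and certificates are omitted for brevity.

// Illustrative sketch only (not minikube's implementation): the generic
// "poll the node until Ready or the deadline" pattern recorded above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	// Poll every 500ms, give up after 6 minutes -- the budget shown in the log.
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// "connection refused" lands here; returning (false, nil) keeps retrying.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Endpoint taken from this run; TLS details omitted -- in this run every
	// request would be refused before certificates matter anyway.
	cfg, err := clientcmd.BuildConfigFromFlags("https://192.168.49.2:8441", "")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "functional-407525"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

With the apiserver down, every iteration takes the err != nil branch, which is exactly the retry trace recorded above.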
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (340.257847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
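The kubelet journal at the end of the logs above points to the underlying failure: kubelet exits on every restart (restart counter 1139-1141) with "kubelet is configured to not run on a host using cgroup v1", so the static apiserver pod is never created and port 8441 keeps refusing connections. On the host, `stat -fc %T /sys/fs/cgroup/` distinguishes the two hierarchies (it prints cgroup2fs on a v2 host). Below is a hypothetical Go check for the same condition, shown only to illustrate what the kubelet validation asserts; it is not part of minikube or the kubelet.

// Hypothetical helper, not part of minikube or the kubelet: reports whether
// /sys/fs/cgroup is the unified (cgroup v2) hierarchy.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func onCgroupV2() (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		return false, err
	}
	// CGROUP2_SUPER_MAGIC marks the cgroup v2 filesystem.
	return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
}

func main() {
	v2, err := onCgroupV2()
	if err != nil {
		panic(err)
	}
	// On this host the answer would be false, matching the kubelet error above.
	fmt.Println("cgroup v2:", v2)
}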
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 kubectl -- --context functional-407525 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 kubectl -- --context functional-407525 get pods: exit status 1 (111.371262ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-407525 kubectl -- --context functional-407525 get pods": exit status 1
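`minikube kubectl -- ...` only forwards the arguments to a bundled kubectl, so this exit status 1 is the same apiserver outage seen in the previous test rather than a kubectl or kubeconfig problem. A small hypothetical pre-check (not part of the test suite) that separates "endpoint unreachable" from client-side misconfiguration by dialing the endpoint this run uses:

// Hypothetical pre-check, not part of the test suite: a plain TCP dial to the
// apiserver endpoint used in this run. "connection refused" here means the
// apiserver process is down, regardless of kubectl configuration.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver endpoint accepts TCP connections")
}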
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
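The parts of the inspect output that matter for this failure are the published ports and the container address: 8441/tcp (the apiserver port) is bound to 127.0.0.1:33161 on the host, and the container holds 192.168.49.2 on the functional-407525 network. The sketch below uses the Docker Engine Go SDK to read just those two fields programmatically; it assumes the SDK is available and is not part of helpers_test.go.

// Sketch using the Docker Engine Go SDK (assumption: SDK available in the
// environment). Extracts the container IP and the host port published for
// the apiserver's 8441/tcp, matching the dump above.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	inspect, err := cli.ContainerInspect(context.Background(), "functional-407525")
	if err != nil {
		panic(err)
	}

	if ep, ok := inspect.NetworkSettings.Networks["functional-407525"]; ok {
		fmt.Println("container IP:", ep.IPAddress) // 192.168.49.2 in the dump above
	}
	if bindings := inspect.NetworkSettings.Ports[nat.Port("8441/tcp")]; len(bindings) > 0 {
		fmt.Printf("apiserver published on %s:%s\n", bindings[0].HostIP, bindings[0].HostPort) // 127.0.0.1:33161
	}
}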
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (325.229921ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 logs -n 25: (1.032898175s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-371413 image ls --format short --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format yaml --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh     │ functional-371413 ssh pgrep buildkitd                                                                                                             │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ image   │ functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr                                            │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls                                                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format json --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format table --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ delete  │ -p functional-371413                                                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ start   │ -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ start   │ -p functional-407525 --alsologtostderr -v=8                                                                                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:43 UTC │                     │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:latest                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add minikube-local-cache-test:functional-407525                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache delete minikube-local-cache-test:functional-407525                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl images                                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ cache   │ functional-407525 cache reload                                                                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ kubectl │ functional-407525 kubectl -- --context functional-407525 get pods                                                                                 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:43:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:43:16.189245  390588 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:43:16.189385  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189397  390588 out.go:374] Setting ErrFile to fd 2...
	I1213 10:43:16.189403  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189684  390588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:43:16.190095  390588 out.go:368] Setting JSON to false
	I1213 10:43:16.190986  390588 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8749,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:43:16.191060  390588 start.go:143] virtualization:  
	I1213 10:43:16.194511  390588 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:43:16.198204  390588 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:43:16.198321  390588 notify.go:221] Checking for updates...
	I1213 10:43:16.204163  390588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:43:16.207088  390588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:16.209934  390588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:43:16.212863  390588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:43:16.215711  390588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:43:16.219166  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:16.219330  390588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:43:16.245531  390588 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:43:16.245660  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.304777  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.295770012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.304888  390588 docker.go:319] overlay module found
	I1213 10:43:16.309644  390588 out.go:179] * Using the docker driver based on existing profile
	I1213 10:43:16.312430  390588 start.go:309] selected driver: docker
	I1213 10:43:16.312447  390588 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.312556  390588 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:43:16.312654  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.369591  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.360947105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.370024  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:16.370077  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:16.370130  390588 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.374951  390588 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:43:16.377750  390588 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:43:16.380575  390588 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:43:16.383625  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:16.383675  390588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:43:16.383684  390588 cache.go:65] Caching tarball of preloaded images
	I1213 10:43:16.383721  390588 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:43:16.383768  390588 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:43:16.383779  390588 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:43:16.383909  390588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:43:16.402414  390588 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:43:16.402437  390588 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:43:16.402458  390588 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:43:16.402490  390588 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:43:16.402563  390588 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-407525"
	I1213 10:43:16.402589  390588 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:43:16.402599  390588 fix.go:54] fixHost starting: 
	I1213 10:43:16.402860  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:16.419664  390588 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:43:16.419692  390588 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:43:16.423019  390588 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:43:16.423065  390588 machine.go:94] provisionDockerMachine start ...
	I1213 10:43:16.423166  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.440791  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.441132  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.441147  390588 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:43:16.590928  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.590952  390588 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:43:16.591012  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.608907  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.609223  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.609243  390588 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:43:16.770512  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.770629  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.791074  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.791392  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.791418  390588 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:43:16.939938  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:43:16.939965  390588 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:43:16.940042  390588 ubuntu.go:190] setting up certificates
	I1213 10:43:16.940060  390588 provision.go:84] configureAuth start
	I1213 10:43:16.940146  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:16.959231  390588 provision.go:143] copyHostCerts
	I1213 10:43:16.959277  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959321  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:43:16.959334  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959423  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:43:16.959550  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959579  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:43:16.959590  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959624  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:43:16.959682  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959708  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:43:16.959712  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959738  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:43:16.959842  390588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:43:17.067458  390588 provision.go:177] copyRemoteCerts
	I1213 10:43:17.067620  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:43:17.067673  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.087609  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.191151  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:43:17.191266  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:43:17.208031  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:43:17.208139  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:43:17.224829  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:43:17.224888  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:43:17.242075  390588 provision.go:87] duration metric: took 301.967659ms to configureAuth
	I1213 10:43:17.242106  390588 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:43:17.242287  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:17.242396  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.259726  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:17.260059  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:17.260089  390588 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:43:17.589136  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:43:17.589164  390588 machine.go:97] duration metric: took 1.166089785s to provisionDockerMachine
	I1213 10:43:17.589176  390588 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:43:17.589189  390588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:43:17.589251  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:43:17.589299  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.609214  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.715839  390588 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:43:17.719089  390588 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:43:17.719109  390588 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:43:17.719114  390588 command_runner.go:130] > VERSION_ID="12"
	I1213 10:43:17.719118  390588 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:43:17.719124  390588 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:43:17.719128  390588 command_runner.go:130] > ID=debian
	I1213 10:43:17.719139  390588 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:43:17.719147  390588 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:43:17.719152  390588 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:43:17.719195  390588 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:43:17.719216  390588 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:43:17.719233  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:43:17.719286  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:43:17.719370  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:43:17.719381  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem
	I1213 10:43:17.719455  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:43:17.719463  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> /etc/test/nested/copy/356328/hosts
	I1213 10:43:17.719505  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:43:17.727090  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:17.744131  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:43:17.760861  390588 start.go:296] duration metric: took 171.654498ms for postStartSetup
	I1213 10:43:17.760950  390588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:43:17.760996  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.777913  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.880295  390588 command_runner.go:130] > 14%
	I1213 10:43:17.880360  390588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:43:17.884436  390588 command_runner.go:130] > 169G
	I1213 10:43:17.884867  390588 fix.go:56] duration metric: took 1.482264041s for fixHost
	I1213 10:43:17.884887  390588 start.go:83] releasing machines lock for "functional-407525", held for 1.482310261s
	I1213 10:43:17.884953  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:17.902293  390588 ssh_runner.go:195] Run: cat /version.json
	I1213 10:43:17.902324  390588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:43:17.902343  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.902383  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.922251  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.922884  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:18.027684  390588 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:43:18.027820  390588 ssh_runner.go:195] Run: systemctl --version
	I1213 10:43:18.121469  390588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:43:18.124198  390588 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:43:18.124239  390588 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:43:18.124329  390588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:43:18.162710  390588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:43:18.167030  390588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:43:18.167242  390588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:43:18.167335  390588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:43:18.175207  390588 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:43:18.175230  390588 start.go:496] detecting cgroup driver to use...
	I1213 10:43:18.175264  390588 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:43:18.175320  390588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:43:18.190633  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:43:18.203672  390588 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:43:18.203747  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:43:18.219163  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:43:18.232309  390588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:43:18.357889  390588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:43:18.493929  390588 docker.go:234] disabling docker service ...
	I1213 10:43:18.494052  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:43:18.509796  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:43:18.523416  390588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:43:18.655317  390588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:43:18.778247  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:43:18.791182  390588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:43:18.805083  390588 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1213 10:43:18.806588  390588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:43:18.806679  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.815701  390588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:43:18.815803  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.824913  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.834321  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.843170  390588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:43:18.851373  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.860701  390588 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.869075  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.877860  390588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:43:18.884514  390588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:43:18.885462  390588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:43:18.893210  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.009167  390588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:43:19.185094  390588 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:43:19.185195  390588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:43:19.189492  390588 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:43:19.189518  390588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:43:19.189526  390588 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1213 10:43:19.189541  390588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:19.189566  390588 command_runner.go:130] > Access: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189581  390588 command_runner.go:130] > Modify: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189586  390588 command_runner.go:130] > Change: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189590  390588 command_runner.go:130] >  Birth: -
	I1213 10:43:19.190244  390588 start.go:564] Will wait 60s for crictl version
	I1213 10:43:19.190335  390588 ssh_runner.go:195] Run: which crictl
	I1213 10:43:19.193561  390588 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:43:19.194286  390588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:43:19.222711  390588 command_runner.go:130] > Version:  0.1.0
	I1213 10:43:19.222747  390588 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:43:19.222752  390588 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:43:19.222773  390588 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:43:19.225058  390588 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:43:19.225194  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.255970  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.256013  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.256019  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.256025  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.256044  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.256051  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.256078  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.256090  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.256094  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.256098  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.256105  390588 command_runner.go:130] >      static
	I1213 10:43:19.256109  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.256113  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.256117  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.256123  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.256128  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.256131  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.256136  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.256166  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.256195  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.258161  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.285922  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.285950  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.285964  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.285970  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.285975  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.285999  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.286010  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.286017  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.286022  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.286028  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.286046  390588 command_runner.go:130] >      static
	I1213 10:43:19.286056  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.286061  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.286075  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.286093  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.286102  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.286108  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.286132  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.286137  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.286153  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.291101  390588 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:43:19.293929  390588 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:43:19.310541  390588 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:43:19.314437  390588 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:43:19.314776  390588 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:43:19.314904  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:19.314962  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.346332  390588 command_runner.go:130] > {
	I1213 10:43:19.346357  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.346361  390588 command_runner.go:130] >     {
	I1213 10:43:19.346369  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.346374  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346380  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.346383  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346387  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346396  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.346404  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.346411  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346416  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.346423  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346429  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346436  390588 command_runner.go:130] >     },
	I1213 10:43:19.346439  390588 command_runner.go:130] >     {
	I1213 10:43:19.346445  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.346449  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346457  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.346467  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346472  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346480  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.346491  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.346494  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346508  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.346518  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346525  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346531  390588 command_runner.go:130] >     },
	I1213 10:43:19.346535  390588 command_runner.go:130] >     {
	I1213 10:43:19.346541  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.346548  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346553  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.346556  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346563  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346571  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.346582  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.346586  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346590  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.346594  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.346600  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346604  390588 command_runner.go:130] >     },
	I1213 10:43:19.346610  390588 command_runner.go:130] >     {
	I1213 10:43:19.346616  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.346621  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346628  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.346632  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346636  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346646  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.346657  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.346661  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346667  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.346671  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346675  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346679  390588 command_runner.go:130] >       },
	I1213 10:43:19.346690  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346698  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346702  390588 command_runner.go:130] >     },
	I1213 10:43:19.346705  390588 command_runner.go:130] >     {
	I1213 10:43:19.346715  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.346722  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346728  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.346731  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346736  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346745  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.346760  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.346764  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346768  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.346775  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346778  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346782  390588 command_runner.go:130] >       },
	I1213 10:43:19.346786  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346796  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346799  390588 command_runner.go:130] >     },
	I1213 10:43:19.346802  390588 command_runner.go:130] >     {
	I1213 10:43:19.346811  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.346818  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346824  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.346828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346832  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346842  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.346851  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.346859  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346863  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.346866  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346870  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346875  390588 command_runner.go:130] >       },
	I1213 10:43:19.346879  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346886  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346889  390588 command_runner.go:130] >     },
	I1213 10:43:19.346892  390588 command_runner.go:130] >     {
	I1213 10:43:19.346898  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.346911  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346917  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.346923  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346927  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346934  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.346946  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.346950  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346954  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.346958  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346964  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346967  390588 command_runner.go:130] >     },
	I1213 10:43:19.346970  390588 command_runner.go:130] >     {
	I1213 10:43:19.346977  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.346984  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346990  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.346993  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346997  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347007  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.347027  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.347034  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347038  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.347041  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347045  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.347048  390588 command_runner.go:130] >       },
	I1213 10:43:19.347053  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347058  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.347062  390588 command_runner.go:130] >     },
	I1213 10:43:19.347065  390588 command_runner.go:130] >     {
	I1213 10:43:19.347072  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.347078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.347083  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.347087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347097  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347109  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.347120  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.347124  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347132  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.347135  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347140  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.347145  390588 command_runner.go:130] >       },
	I1213 10:43:19.347149  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347155  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.347158  390588 command_runner.go:130] >     }
	I1213 10:43:19.347161  390588 command_runner.go:130] >   ]
	I1213 10:43:19.347164  390588 command_runner.go:130] > }
	I1213 10:43:19.347379  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.347391  390588 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:43:19.347452  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.372755  390588 command_runner.go:130] > {
	I1213 10:43:19.372774  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.372779  390588 command_runner.go:130] >     {
	I1213 10:43:19.372788  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.372792  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372799  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.372803  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372807  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372816  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.372824  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.372828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372832  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.372836  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372851  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372854  390588 command_runner.go:130] >     },
	I1213 10:43:19.372857  390588 command_runner.go:130] >     {
	I1213 10:43:19.372863  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.372868  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372873  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.372876  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372880  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372889  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.372897  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.372900  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372904  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.372908  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372920  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372924  390588 command_runner.go:130] >     },
	I1213 10:43:19.372927  390588 command_runner.go:130] >     {
	I1213 10:43:19.372934  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.372938  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372943  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.372947  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372950  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372958  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.372966  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.372970  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372973  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.372978  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.372982  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372985  390588 command_runner.go:130] >     },
	I1213 10:43:19.372988  390588 command_runner.go:130] >     {
	I1213 10:43:19.372994  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.372998  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373002  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.373007  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373011  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373018  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.373025  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.373029  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373033  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.373036  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373040  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373043  390588 command_runner.go:130] >       },
	I1213 10:43:19.373052  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373056  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373059  390588 command_runner.go:130] >     },
	I1213 10:43:19.373062  390588 command_runner.go:130] >     {
	I1213 10:43:19.373070  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.373078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373083  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.373087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373090  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373098  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.373110  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.373114  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373118  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.373122  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373126  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373129  390588 command_runner.go:130] >       },
	I1213 10:43:19.373132  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373136  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373139  390588 command_runner.go:130] >     },
	I1213 10:43:19.373142  390588 command_runner.go:130] >     {
	I1213 10:43:19.373148  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.373151  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373157  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.373161  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373164  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373172  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.373181  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.373184  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373188  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.373191  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373195  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373198  390588 command_runner.go:130] >       },
	I1213 10:43:19.373202  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373206  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373208  390588 command_runner.go:130] >     },
	I1213 10:43:19.373211  390588 command_runner.go:130] >     {
	I1213 10:43:19.373218  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.373222  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373230  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.373234  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373238  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373246  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.373253  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.373256  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373260  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.373263  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373267  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373270  390588 command_runner.go:130] >     },
	I1213 10:43:19.373273  390588 command_runner.go:130] >     {
	I1213 10:43:19.373279  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.373283  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373288  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.373291  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373295  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373303  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.373321  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.373324  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373328  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.373331  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373336  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373339  390588 command_runner.go:130] >       },
	I1213 10:43:19.373343  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373346  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373349  390588 command_runner.go:130] >     },
	I1213 10:43:19.373352  390588 command_runner.go:130] >     {
	I1213 10:43:19.373359  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.373362  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373367  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.373372  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373376  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373383  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.373394  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.373398  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373402  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.373405  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373409  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.373412  390588 command_runner.go:130] >       },
	I1213 10:43:19.373419  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373422  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.373426  390588 command_runner.go:130] >     }
	I1213 10:43:19.373428  390588 command_runner.go:130] >   ]
	I1213 10:43:19.373432  390588 command_runner.go:130] > }
	I1213 10:43:19.375861  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.375885  390588 cache_images.go:86] Images are preloaded, skipping loading
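
For context on the two JSON dumps above: the image-preload check simply decodes the output of `sudo crictl images --output json` and looks for the expected image tags. The following is a minimal, hypothetical Go sketch of that idea (not minikube's actual implementation); the struct field names mirror the JSON keys recorded in the log (`id`, `repoTags`, `repoDigests`, `size`, `pinned`), and the image tag used in the lookup is taken from the listing above.

// Hypothetical sketch: decode `sudo crictl images --output json` (as captured
// in the log above) and check that a required image tag is present.
// Not minikube's code; field names follow the JSON keys shown in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"slices"
)

type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // crictl reports size as a string, see log
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	for _, img := range list.Images {
		if slices.Contains(img.RepoTags, want) {
			fmt.Printf("found %s (id %s, size %s bytes)\n", want, img.ID, img.Size)
			return
		}
	}
	fmt.Printf("%s not preloaded\n", want)
}
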
	I1213 10:43:19.375894  390588 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:43:19.375988  390588 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
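
The unit text logged just above is the kubelet systemd drop-in generated for this node (hostname functional-407525, node IP 192.168.49.2, Kubernetes v1.35.0-beta.0, CRI-O runtime). As a rough illustration only, a hypothetical Go sketch rendering such a drop-in with text/template might look like the following; the template body and parameter values are copied from the log, while the struct and output path handling are assumptions, not minikube's actual template or file layout.

// Hypothetical sketch: render a kubelet systemd drop-in from node parameters.
// Values mirror the log above; this is not minikube's real template.
package main

import (
	"os"
	"text/template"
)

type kubeletParams struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Print to stdout; in practice the rendered unit would be written onto the node.
	if err := t.Execute(os.Stdout, kubeletParams{
		KubernetesVersion: "v1.35.0-beta.0",
		Hostname:          "functional-407525",
		NodeIP:            "192.168.49.2",
	}); err != nil {
		panic(err)
	}
}
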
	I1213 10:43:19.376071  390588 ssh_runner.go:195] Run: crio config
	I1213 10:43:19.425743  390588 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:43:19.425768  390588 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:43:19.425775  390588 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:43:19.425779  390588 command_runner.go:130] > #
	I1213 10:43:19.425787  390588 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:43:19.425793  390588 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:43:19.425801  390588 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:43:19.425810  390588 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:43:19.425814  390588 command_runner.go:130] > # reload'.
	I1213 10:43:19.425821  390588 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:43:19.425828  390588 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:43:19.425838  390588 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:43:19.425844  390588 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:43:19.425847  390588 command_runner.go:130] > [crio]
	I1213 10:43:19.425854  390588 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:43:19.425862  390588 command_runner.go:130] > # containers images, in this directory.
	I1213 10:43:19.426591  390588 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:43:19.426608  390588 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:43:19.427294  390588 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:43:19.427313  390588 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:43:19.427819  390588 command_runner.go:130] > # imagestore = ""
	I1213 10:43:19.427842  390588 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:43:19.427850  390588 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:43:19.428482  390588 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:43:19.428503  390588 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:43:19.428511  390588 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:43:19.428824  390588 command_runner.go:130] > # storage_option = [
	I1213 10:43:19.429159  390588 command_runner.go:130] > # ]
	I1213 10:43:19.429181  390588 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:43:19.429189  390588 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:43:19.429811  390588 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:43:19.429832  390588 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:43:19.429847  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:43:19.429857  390588 command_runner.go:130] > # always happen on a node reboot
	I1213 10:43:19.430483  390588 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:43:19.430528  390588 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:43:19.430541  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:43:19.430547  390588 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:43:19.431051  390588 command_runner.go:130] > # version_file_persist = ""
	I1213 10:43:19.431076  390588 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:43:19.431086  390588 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:43:19.431716  390588 command_runner.go:130] > # internal_wipe = true
	I1213 10:43:19.431739  390588 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:43:19.431747  390588 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:43:19.432440  390588 command_runner.go:130] > # internal_repair = true
	I1213 10:43:19.432456  390588 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:43:19.432463  390588 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:43:19.432469  390588 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:43:19.432478  390588 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:43:19.432487  390588 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:43:19.432491  390588 command_runner.go:130] > [crio.api]
	I1213 10:43:19.432496  390588 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:43:19.432503  390588 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:43:19.432512  390588 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:43:19.432517  390588 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:43:19.432544  390588 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:43:19.432552  390588 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:43:19.432851  390588 command_runner.go:130] > # stream_port = "0"
	I1213 10:43:19.432867  390588 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:43:19.432873  390588 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:43:19.432879  390588 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:43:19.432886  390588 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:43:19.432897  390588 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:43:19.432906  390588 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433090  390588 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:43:19.433111  390588 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:43:19.433117  390588 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433335  390588 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:43:19.433354  390588 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:43:19.433362  390588 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:43:19.433373  390588 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:43:19.433389  390588 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:43:19.433408  390588 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433419  390588 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:43:19.433428  390588 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433678  390588 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:43:19.433694  390588 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:43:19.433701  390588 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:43:19.433705  390588 command_runner.go:130] > [crio.runtime]
	I1213 10:43:19.433711  390588 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:43:19.433719  390588 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:43:19.433726  390588 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:43:19.433733  390588 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:43:19.433737  390588 command_runner.go:130] > # default_ulimits = [
	I1213 10:43:19.433744  390588 command_runner.go:130] > # ]
	I1213 10:43:19.433751  390588 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:43:19.433758  390588 command_runner.go:130] > # no_pivot = false
	I1213 10:43:19.433764  390588 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:43:19.433771  390588 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:43:19.433778  390588 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:43:19.433785  390588 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:43:19.433790  390588 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:43:19.433797  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.433949  390588 command_runner.go:130] > # conmon = ""
	I1213 10:43:19.433968  390588 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:43:19.433978  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:43:19.434402  390588 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:43:19.434425  390588 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:43:19.434435  390588 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:43:19.434446  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.434453  390588 command_runner.go:130] > # conmon_env = [
	I1213 10:43:19.434466  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434472  390588 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:43:19.434478  390588 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:43:19.434484  390588 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:43:19.434488  390588 command_runner.go:130] > # default_env = [
	I1213 10:43:19.434491  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434497  390588 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:43:19.434515  390588 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:43:19.434525  390588 command_runner.go:130] > # selinux = false
	I1213 10:43:19.434535  390588 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:43:19.434543  390588 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:43:19.434555  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434559  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.434565  390588 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:43:19.434570  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434841  390588 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:43:19.434858  390588 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:43:19.434865  390588 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:43:19.434872  390588 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:43:19.434885  390588 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:43:19.434891  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434896  390588 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:43:19.434902  390588 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:43:19.434908  390588 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:43:19.434913  390588 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:43:19.434937  390588 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:43:19.434946  390588 command_runner.go:130] > # blockio parameters.
	I1213 10:43:19.434950  390588 command_runner.go:130] > # blockio_reload = false
	I1213 10:43:19.434957  390588 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:43:19.434961  390588 command_runner.go:130] > # irqbalance daemon.
	I1213 10:43:19.434966  390588 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:43:19.434972  390588 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:43:19.434982  390588 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:43:19.434992  390588 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:43:19.435365  390588 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:43:19.435381  390588 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:43:19.435387  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.435392  390588 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:43:19.435398  390588 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:43:19.435404  390588 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:43:19.435411  390588 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:43:19.435584  390588 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:43:19.435601  390588 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:43:19.435608  390588 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:43:19.435617  390588 command_runner.go:130] > # will be added.
	I1213 10:43:19.436649  390588 command_runner.go:130] > # default_capabilities = [
	I1213 10:43:19.436661  390588 command_runner.go:130] > # 	"CHOWN",
	I1213 10:43:19.436665  390588 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:43:19.436669  390588 command_runner.go:130] > # 	"FSETID",
	I1213 10:43:19.436673  390588 command_runner.go:130] > # 	"FOWNER",
	I1213 10:43:19.436679  390588 command_runner.go:130] > # 	"SETGID",
	I1213 10:43:19.436683  390588 command_runner.go:130] > # 	"SETUID",
	I1213 10:43:19.436708  390588 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:43:19.436718  390588 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:43:19.436722  390588 command_runner.go:130] > # 	"KILL",
	I1213 10:43:19.436725  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436737  390588 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:43:19.436744  390588 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:43:19.436749  390588 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:43:19.436759  390588 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:43:19.436773  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436777  390588 command_runner.go:130] > default_sysctls = [
	I1213 10:43:19.436788  390588 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:43:19.436794  390588 command_runner.go:130] > ]
	I1213 10:43:19.436799  390588 command_runner.go:130] > # List of devices on the host that a
	I1213 10:43:19.436806  390588 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:43:19.436813  390588 command_runner.go:130] > # allowed_devices = [
	I1213 10:43:19.436817  390588 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:43:19.436820  390588 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:43:19.436823  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436828  390588 command_runner.go:130] > # List of additional devices. specified as
	I1213 10:43:19.436836  390588 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:43:19.436842  390588 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:43:19.436850  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436857  390588 command_runner.go:130] > # additional_devices = [
	I1213 10:43:19.436861  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436868  390588 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:43:19.436872  390588 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:43:19.436878  390588 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:43:19.436882  390588 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:43:19.436888  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436895  390588 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:43:19.436904  390588 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:43:19.436908  390588 command_runner.go:130] > # Defaults to false.
	I1213 10:43:19.436913  390588 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:43:19.436919  390588 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:43:19.436926  390588 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:43:19.436930  390588 command_runner.go:130] > # hooks_dir = [
	I1213 10:43:19.436936  390588 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:43:19.436942  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436948  390588 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:43:19.436964  390588 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:43:19.436969  390588 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:43:19.436973  390588 command_runner.go:130] > #
	I1213 10:43:19.436981  390588 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:43:19.436992  390588 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:43:19.437001  390588 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:43:19.437008  390588 command_runner.go:130] > #
	I1213 10:43:19.437022  390588 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:43:19.437029  390588 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:43:19.437035  390588 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:43:19.437044  390588 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:43:19.437047  390588 command_runner.go:130] > #
	I1213 10:43:19.437051  390588 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:43:19.437059  390588 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:43:19.437068  390588 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:43:19.437072  390588 command_runner.go:130] > # pids_limit = -1
	I1213 10:43:19.437078  390588 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 10:43:19.437087  390588 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:43:19.437094  390588 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:43:19.437104  390588 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:43:19.437110  390588 command_runner.go:130] > # log_size_max = -1
	I1213 10:43:19.437117  390588 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:43:19.437124  390588 command_runner.go:130] > # log_to_journald = false
	I1213 10:43:19.437130  390588 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:43:19.437136  390588 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:43:19.437143  390588 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:43:19.437149  390588 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:43:19.437160  390588 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:43:19.437164  390588 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:43:19.437170  390588 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:43:19.437174  390588 command_runner.go:130] > # read_only = false
	I1213 10:43:19.437180  390588 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:43:19.437188  390588 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:43:19.437195  390588 command_runner.go:130] > # live configuration reload.
	I1213 10:43:19.437199  390588 command_runner.go:130] > # log_level = "info"
	I1213 10:43:19.437216  390588 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:43:19.437221  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.437232  390588 command_runner.go:130] > # log_filter = ""
	I1213 10:43:19.437241  390588 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437248  390588 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:43:19.437252  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437260  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437264  390588 command_runner.go:130] > # uid_mappings = ""
	I1213 10:43:19.437270  390588 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437280  390588 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:43:19.437285  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437295  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437301  390588 command_runner.go:130] > # gid_mappings = ""
	I1213 10:43:19.437308  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:43:19.437314  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437320  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437331  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437335  390588 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:43:19.437345  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:43:19.437354  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437361  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437371  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437375  390588 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:43:19.437382  390588 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:43:19.437390  390588 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:43:19.437396  390588 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:43:19.437403  390588 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:43:19.437409  390588 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:43:19.437416  390588 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:43:19.437423  390588 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:43:19.437428  390588 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:43:19.437432  390588 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:43:19.437441  390588 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:43:19.437449  390588 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:43:19.437457  390588 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:43:19.437473  390588 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:43:19.437482  390588 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:43:19.437491  390588 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:43:19.437497  390588 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:43:19.437502  390588 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:43:19.437506  390588 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:43:19.437511  390588 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:43:19.437519  390588 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:43:19.437524  390588 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:43:19.437534  390588 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:43:19.437546  390588 command_runner.go:130] > # pinns_path = ""
	I1213 10:43:19.437553  390588 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:43:19.437560  390588 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:43:19.437567  390588 command_runner.go:130] > # enable_criu_support = true
	I1213 10:43:19.437573  390588 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:43:19.437579  390588 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:43:19.437586  390588 command_runner.go:130] > # enable_pod_events = false
	I1213 10:43:19.437593  390588 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:43:19.437598  390588 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:43:19.437604  390588 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:43:19.437609  390588 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:43:19.437619  390588 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 10:43:19.437636  390588 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:43:19.437642  390588 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:43:19.437653  390588 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:43:19.437664  390588 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:43:19.437668  390588 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:43:19.437672  390588 command_runner.go:130] > # ]
	I1213 10:43:19.437678  390588 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:43:19.437685  390588 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:43:19.437693  390588 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:43:19.437708  390588 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:43:19.437715  390588 command_runner.go:130] > #
	I1213 10:43:19.437724  390588 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:43:19.437729  390588 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:43:19.437737  390588 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:43:19.437742  390588 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:43:19.437752  390588 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:43:19.437760  390588 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:43:19.437764  390588 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:43:19.437769  390588 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:43:19.437775  390588 command_runner.go:130] > # monitor_env = []
	I1213 10:43:19.437780  390588 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:43:19.437787  390588 command_runner.go:130] > # allowed_annotations = []
	I1213 10:43:19.437793  390588 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:43:19.437799  390588 command_runner.go:130] > # no_sync_log = false
	I1213 10:43:19.437803  390588 command_runner.go:130] > # default_annotations = {}
	I1213 10:43:19.437807  390588 command_runner.go:130] > # stream_websockets = false
	I1213 10:43:19.437810  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.437838  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.437847  390588 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:43:19.437854  390588 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:43:19.437860  390588 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:43:19.437868  390588 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:43:19.437874  390588 command_runner.go:130] > #   in $PATH.
	I1213 10:43:19.437880  390588 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:43:19.437888  390588 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:43:19.437895  390588 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:43:19.437898  390588 command_runner.go:130] > #   state.
	I1213 10:43:19.437905  390588 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:43:19.437913  390588 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 10:43:19.437920  390588 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:43:19.437926  390588 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:43:19.437932  390588 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:43:19.437938  390588 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:43:19.437949  390588 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:43:19.437959  390588 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:43:19.437971  390588 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:43:19.437976  390588 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:43:19.437983  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:43:19.437993  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:43:19.438000  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:43:19.438006  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:43:19.438017  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:43:19.438026  390588 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:43:19.438042  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:43:19.438048  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:43:19.438055  390588 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:43:19.438064  390588 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:43:19.438071  390588 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:43:19.438079  390588 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:43:19.438091  390588 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:43:19.438097  390588 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:43:19.438104  390588 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:43:19.438114  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:43:19.438123  390588 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:43:19.438128  390588 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:43:19.438135  390588 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:43:19.438145  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:43:19.438153  390588 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:43:19.438160  390588 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:43:19.438168  390588 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:43:19.438173  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:43:19.438182  390588 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 10:43:19.438186  390588 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:43:19.438194  390588 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:43:19.438204  390588 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:43:19.438215  390588 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:43:19.438228  390588 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:43:19.438236  390588 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:43:19.438246  390588 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:43:19.438254  390588 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:43:19.438263  390588 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:43:19.438271  390588 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:43:19.438280  390588 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:43:19.438293  390588 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:43:19.438300  390588 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:43:19.438308  390588 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:43:19.438322  390588 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:43:19.438327  390588 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:43:19.438335  390588 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:43:19.438343  390588 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:43:19.438346  390588 command_runner.go:130] > #
	I1213 10:43:19.438350  390588 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:43:19.438353  390588 command_runner.go:130] > #
	I1213 10:43:19.438359  390588 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:43:19.438370  390588 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:43:19.438376  390588 command_runner.go:130] > #
	I1213 10:43:19.438383  390588 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:43:19.438392  390588 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:43:19.438395  390588 command_runner.go:130] > #
	I1213 10:43:19.438401  390588 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:43:19.438406  390588 command_runner.go:130] > # feature.
	I1213 10:43:19.438410  390588 command_runner.go:130] > #
	I1213 10:43:19.438416  390588 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:43:19.438422  390588 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:43:19.438431  390588 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:43:19.438437  390588 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:43:19.438447  390588 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:43:19.438450  390588 command_runner.go:130] > #
	I1213 10:43:19.438456  390588 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:43:19.438465  390588 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:43:19.438471  390588 command_runner.go:130] > #
	I1213 10:43:19.438478  390588 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:43:19.438486  390588 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:43:19.438491  390588 command_runner.go:130] > #
	I1213 10:43:19.438497  390588 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:43:19.438512  390588 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:43:19.438516  390588 command_runner.go:130] > # limitation.
	I1213 10:43:19.438523  390588 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:43:19.438528  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:43:19.438533  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438539  390588 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:43:19.438543  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438549  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438553  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438560  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438564  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438577  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438581  390588 command_runner.go:130] > allowed_annotations = [
	I1213 10:43:19.438586  390588 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:43:19.438589  390588 command_runner.go:130] > ]
	I1213 10:43:19.438594  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438599  390588 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:43:19.438604  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:43:19.438610  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438614  390588 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:43:19.438617  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438621  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438625  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438633  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438639  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438644  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438649  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438664  390588 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:43:19.438673  390588 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:43:19.438684  390588 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:43:19.438692  390588 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 10:43:19.438702  390588 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:43:19.438712  390588 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:43:19.438728  390588 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:43:19.438734  390588 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:43:19.438743  390588 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:43:19.438755  390588 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:43:19.438761  390588 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:43:19.438772  390588 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:43:19.438778  390588 command_runner.go:130] > # Example:
	I1213 10:43:19.438782  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:43:19.438787  390588 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:43:19.438793  390588 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:43:19.438801  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:43:19.438806  390588 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:43:19.438810  390588 command_runner.go:130] > # cpushares = "5"
	I1213 10:43:19.438814  390588 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:43:19.438820  390588 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:43:19.438825  390588 command_runner.go:130] > # cpulimit = "35"
	I1213 10:43:19.438837  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.438841  390588 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:43:19.438852  390588 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:43:19.438861  390588 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:43:19.438866  390588 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:43:19.438875  390588 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:43:19.438880  390588 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 10:43:19.438885  390588 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:43:19.438894  390588 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:43:19.438905  390588 command_runner.go:130] > # Default value is set to true
	I1213 10:43:19.438910  390588 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:43:19.438915  390588 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:43:19.438925  390588 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:43:19.438932  390588 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:43:19.438938  390588 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:43:19.438943  390588 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 10:43:19.438951  390588 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:43:19.438954  390588 command_runner.go:130] > # timezone = ""
	I1213 10:43:19.438961  390588 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:43:19.438967  390588 command_runner.go:130] > #
	I1213 10:43:19.438973  390588 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:43:19.438979  390588 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:43:19.438983  390588 command_runner.go:130] > [crio.image]
	I1213 10:43:19.438993  390588 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:43:19.438999  390588 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:43:19.439005  390588 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:43:19.439015  390588 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439019  390588 command_runner.go:130] > # global_auth_file = ""
	I1213 10:43:19.439024  390588 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:43:19.439029  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439034  390588 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.439040  390588 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:43:19.439048  390588 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439055  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439060  390588 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:43:19.439066  390588 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:43:19.439072  390588 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1213 10:43:19.439081  390588 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1213 10:43:19.439087  390588 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:43:19.439094  390588 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:43:19.439100  390588 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:43:19.439106  390588 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:43:19.439111  390588 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:43:19.439117  390588 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:43:19.439123  390588 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:43:19.439134  390588 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:43:19.439142  390588 command_runner.go:130] > # pinned_images = [
	I1213 10:43:19.439145  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439151  390588 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:43:19.439157  390588 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:43:19.439166  390588 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:43:19.439172  390588 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:43:19.439180  390588 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:43:19.439184  390588 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:43:19.439190  390588 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:43:19.439197  390588 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:43:19.439203  390588 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:43:19.439209  390588 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 10:43:19.439223  390588 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:43:19.439228  390588 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:43:19.439234  390588 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:43:19.439243  390588 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:43:19.439247  390588 command_runner.go:130] > # changing them here.
	I1213 10:43:19.439253  390588 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:43:19.439260  390588 command_runner.go:130] > # insecure_registries = [
	I1213 10:43:19.439263  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439268  390588 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:43:19.439273  390588 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:43:19.439723  390588 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:43:19.439741  390588 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:43:19.439879  390588 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:43:19.439918  390588 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:43:19.439927  390588 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:43:19.439931  390588 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:43:19.439937  390588 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:43:19.439946  390588 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 10:43:19.439958  390588 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:43:19.439963  390588 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:43:19.439974  390588 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:43:19.439985  390588 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:43:19.439993  390588 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 10:43:19.440002  390588 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:43:19.440006  390588 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:43:19.440012  390588 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:43:19.440018  390588 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:43:19.440023  390588 command_runner.go:130] > # oci_artifact_mount_support = true
	I1213 10:43:19.440029  390588 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:43:19.440034  390588 command_runner.go:130] > # CNI plugins.
	I1213 10:43:19.440037  390588 command_runner.go:130] > [crio.network]
	I1213 10:43:19.440044  390588 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:43:19.440053  390588 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 10:43:19.440058  390588 command_runner.go:130] > # cni_default_network = ""
	I1213 10:43:19.440064  390588 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:43:19.440073  390588 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:43:19.440080  390588 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:43:19.440084  390588 command_runner.go:130] > # plugin_dirs = [
	I1213 10:43:19.440211  390588 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:43:19.440357  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440384  390588 command_runner.go:130] > # List of included pod metrics.
	I1213 10:43:19.440392  390588 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:43:19.440401  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440408  390588 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:43:19.440418  390588 command_runner.go:130] > [crio.metrics]
	I1213 10:43:19.440423  390588 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:43:19.440436  390588 command_runner.go:130] > # enable_metrics = false
	I1213 10:43:19.440441  390588 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:43:19.440446  390588 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:43:19.440452  390588 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:43:19.440460  390588 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:43:19.440472  390588 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:43:19.440477  390588 command_runner.go:130] > # metrics_collectors = [
	I1213 10:43:19.440481  390588 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:43:19.440496  390588 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:43:19.440501  390588 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:43:19.440506  390588 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:43:19.440509  390588 command_runner.go:130] > # 	"operations_total",
	I1213 10:43:19.440637  390588 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:43:19.440664  390588 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:43:19.440670  390588 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:43:19.440688  390588 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:43:19.440696  390588 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:43:19.440701  390588 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:43:19.440705  390588 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:43:19.440716  390588 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:43:19.440720  390588 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:43:19.440726  390588 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:43:19.440734  390588 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:43:19.440739  390588 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:43:19.440742  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440749  390588 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:43:19.440758  390588 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:43:19.440764  390588 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:43:19.440768  390588 command_runner.go:130] > # metrics_port = 9090
	I1213 10:43:19.440773  390588 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:43:19.440901  390588 command_runner.go:130] > # metrics_socket = ""
	I1213 10:43:19.440915  390588 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:43:19.440937  390588 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:43:19.440950  390588 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:43:19.440955  390588 command_runner.go:130] > # certificate on any modification event.
	I1213 10:43:19.440959  390588 command_runner.go:130] > # metrics_cert = ""
	I1213 10:43:19.440964  390588 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:43:19.440969  390588 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:43:19.440972  390588 command_runner.go:130] > # metrics_key = ""
	I1213 10:43:19.440978  390588 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:43:19.440982  390588 command_runner.go:130] > [crio.tracing]
	I1213 10:43:19.440995  390588 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:43:19.441000  390588 command_runner.go:130] > # enable_tracing = false
	I1213 10:43:19.441006  390588 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1213 10:43:19.441015  390588 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:43:19.441022  390588 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:43:19.441031  390588 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1213 10:43:19.441039  390588 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:43:19.441042  390588 command_runner.go:130] > [crio.nri]
	I1213 10:43:19.441047  390588 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:43:19.441253  390588 command_runner.go:130] > # enable_nri = true
	I1213 10:43:19.441268  390588 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:43:19.441274  390588 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:43:19.441278  390588 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:43:19.441283  390588 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:43:19.441288  390588 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:43:19.441293  390588 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:43:19.441298  390588 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:43:19.441355  390588 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:43:19.441365  390588 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:43:19.441370  390588 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:43:19.441374  390588 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:43:19.441379  390588 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:43:19.441384  390588 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:43:19.441391  390588 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:43:19.441401  390588 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:43:19.441405  390588 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:43:19.441417  390588 command_runner.go:130] > # - OCI hook injection
	I1213 10:43:19.441427  390588 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:43:19.441435  390588 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:43:19.441440  390588 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:43:19.441444  390588 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:43:19.441453  390588 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:43:19.441460  390588 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:43:19.441466  390588 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:43:19.441469  390588 command_runner.go:130] > #
	I1213 10:43:19.441473  390588 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:43:19.441480  390588 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:43:19.441485  390588 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:43:19.441629  390588 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:43:19.441658  390588 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:43:19.441671  390588 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:43:19.441677  390588 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:43:19.441685  390588 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:43:19.441688  390588 command_runner.go:130] > # ]
	I1213 10:43:19.441694  390588 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1213 10:43:19.441700  390588 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:43:19.441709  390588 command_runner.go:130] > [crio.stats]
	I1213 10:43:19.441720  390588 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:43:19.441730  390588 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:43:19.441734  390588 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:43:19.441743  390588 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:43:19.441752  390588 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:43:19.441756  390588 command_runner.go:130] > # collection_period = 0
	I1213 10:43:19.443275  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.403988128Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:43:19.443305  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404025092Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:43:19.443315  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404051931Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:43:19.443326  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404076596Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:43:19.443340  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404148548Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:19.443352  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404414955Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:43:19.443364  390588 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
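	The dump above is CRI-O's effective configuration, assembled from /etc/crio/crio.conf (skipped here because it does not exist) plus the drop-in files 02-crio.conf and 10-crio.conf under /etc/crio/crio.conf.d. A minimal sketch of inspecting that layering on the node, and of adding the seccomp notifier annotation to the crun handler documented above, follows; the 99- file name is an illustrative assumption, the handler values are copied from the dump, and the "crio config" subcommand is assumed to be present in this image.
	
	# List the drop-ins CRI-O merged above and print the effective crun table
	# (sketch; "crio config" is assumed to be available on this kicbase image).
	ls -l /etc/crio/crio.conf.d/
	sudo crio config | grep -A 8 '\[crio.runtime.runtimes.crun\]'
	
	# Hypothetical drop-in allowing the seccomp notifier annotation on crun;
	# the 99- prefix is an assumption so it loads after 02- and 10-crio.conf.
	sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/libexec/crio/crun"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
	"io.containers.trace-syscall",
	"io.kubernetes.cri-o.seccompNotifierAction",
	]
	EOF
	sudo systemctl restart crio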
	I1213 10:43:19.443836  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:19.443854  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:19.443875  390588 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:43:19.443898  390588 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:43:19.444025  390588 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
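	The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A short sketch of sanity-checking that file with the kubeadm binary already staged on the node (the binary path is taken from the log; running this check is an assumption, not something the test does):
	
	# Validate the generated kubeadm config before it is applied (sketch).
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new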
	
	I1213 10:43:19.444095  390588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:43:19.450891  390588 command_runner.go:130] > kubeadm
	I1213 10:43:19.450967  390588 command_runner.go:130] > kubectl
	I1213 10:43:19.450987  390588 command_runner.go:130] > kubelet
	I1213 10:43:19.451803  390588 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:43:19.451864  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:43:19.459352  390588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:43:19.471938  390588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:43:19.485136  390588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:43:19.498010  390588 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:43:19.501925  390588 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:43:19.502045  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.620049  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
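	Above, minikube writes the kubelet systemd drop-in and unit file, reloads systemd, and starts kubelet. A sketch of checking that state on the node with standard systemd tooling (not part of the log):
	
	# Confirm the unit files written above and the kubelet service state.
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl cat kubelet
	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet -n 20 --no-pager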
	I1213 10:43:20.022042  390588 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:43:20.022188  390588 certs.go:195] generating shared ca certs ...
	I1213 10:43:20.022221  390588 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.022446  390588 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:43:20.022567  390588 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:43:20.022606  390588 certs.go:257] generating profile certs ...
	I1213 10:43:20.022771  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:43:20.022893  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:43:20.023000  390588 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:43:20.023048  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:43:20.023081  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:43:20.023123  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:43:20.023158  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:43:20.023202  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:43:20.023238  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:43:20.023279  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:43:20.023318  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:43:20.023431  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:43:20.023496  390588 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:43:20.023540  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:43:20.023607  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:43:20.023670  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:43:20.023728  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:43:20.023828  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:20.023897  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.023941  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem -> /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.023985  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.024591  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:43:20.049939  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:43:20.071962  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:43:20.093520  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:43:20.117621  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:43:20.135349  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:43:20.152883  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:43:20.170121  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:43:20.188254  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:43:20.205892  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:43:20.223561  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:43:20.241467  390588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:43:20.254691  390588 ssh_runner.go:195] Run: openssl version
	I1213 10:43:20.260777  390588 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:43:20.261193  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.268769  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:43:20.276440  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280293  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280332  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280379  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.320848  390588 command_runner.go:130] > 3ec20f2e
	I1213 10:43:20.321296  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:43:20.328708  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.335901  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:43:20.343392  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347019  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347264  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347323  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.388019  390588 command_runner.go:130] > b5213941
	I1213 10:43:20.388604  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:43:20.396066  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.403389  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:43:20.410914  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414772  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414823  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414888  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.455731  390588 command_runner.go:130] > 51391683
	I1213 10:43:20.456248  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
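	The three openssl/ln runs above install each CA under /usr/share/ca-certificates and link it into /etc/ssl/certs both by name and by its OpenSSL subject hash. A condensed sketch of the same routine for one file, using the commands and the b5213941 hash seen in the log:
	
	# Link a CA into the system trust directory by name and by subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in the log above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "hash link in place"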
	I1213 10:43:20.463583  390588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467136  390588 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467160  390588 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:43:20.467167  390588 command_runner.go:130] > Device: 259,1	Inode: 1322536     Links: 1
	I1213 10:43:20.467174  390588 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:20.467180  390588 command_runner.go:130] > Access: 2025-12-13 10:39:12.482590700 +0000
	I1213 10:43:20.467186  390588 command_runner.go:130] > Modify: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467191  390588 command_runner.go:130] > Change: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467197  390588 command_runner.go:130] >  Birth: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467264  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:43:20.507794  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.508276  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:43:20.549373  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.549450  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:43:20.591501  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.592041  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:43:20.633163  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.633239  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:43:20.673681  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.674235  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:43:20.714863  390588 command_runner.go:130] > Certificate will not expire
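	Each control-plane certificate is checked with openssl x509 -checkend 86400, i.e. it must stay valid for at least another 24 hours. The same check, looped over the certificates the log names (a sketch; paths are copied from the lines above):
	
	# Re-run the 24h expiry check over the certificates listed in the log.
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" \
	    && echo "${c}: valid for at least 24h" \
	    || echo "${c}: expires within 24h"
	done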
	I1213 10:43:20.715372  390588 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:20.715472  390588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:43:20.715572  390588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:43:20.742591  390588 cri.go:89] found id: ""
	I1213 10:43:20.742663  390588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:43:20.749676  390588 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:43:20.749696  390588 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:43:20.749703  390588 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:43:20.750605  390588 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:43:20.750650  390588 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:43:20.750723  390588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:43:20.758246  390588 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:43:20.758662  390588 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-407525" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.758765  390588 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "functional-407525" cluster setting kubeconfig missing "functional-407525" context setting]
	I1213 10:43:20.759076  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.759474  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.759724  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.760259  390588 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:43:20.760282  390588 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:43:20.760289  390588 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:43:20.760294  390588 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:43:20.760299  390588 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:43:20.760595  390588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:43:20.760675  390588 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:43:20.768313  390588 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:43:20.768394  390588 kubeadm.go:602] duration metric: took 17.723293ms to restartPrimaryControlPlane
	I1213 10:43:20.768419  390588 kubeadm.go:403] duration metric: took 53.05457ms to StartCluster
	I1213 10:43:20.768469  390588 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.768581  390588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.769195  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.769470  390588 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:43:20.769730  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:20.769792  390588 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
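Annotation: the toEnable map above lists every known addon, with only storage-provisioner and default-storageclass switched on for this restart. A tiny illustrative sketch of extracting the enabled names from such a map (the map literal here is abbreviated from the log, not minikube's actual data structure):

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "default-storageclass": true,
            "storage-provisioner":  true,
            "ingress":              false, // ...the remaining addons default to false, as in the log
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        fmt.Println("addons to enable:", enabled)
    }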
	I1213 10:43:20.769868  390588 addons.go:70] Setting storage-provisioner=true in profile "functional-407525"
	I1213 10:43:20.769887  390588 addons.go:239] Setting addon storage-provisioner=true in "functional-407525"
	I1213 10:43:20.769967  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.770424  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.770582  390588 addons.go:70] Setting default-storageclass=true in profile "functional-407525"
	I1213 10:43:20.770602  390588 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-407525"
	I1213 10:43:20.770845  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.776047  390588 out.go:179] * Verifying Kubernetes components...
	I1213 10:43:20.778873  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:20.803376  390588 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:43:20.806823  390588 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:20.806848  390588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:43:20.806911  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.815503  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.815748  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.816048  390588 addons.go:239] Setting addon default-storageclass=true in "functional-407525"
	I1213 10:43:20.816085  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.816499  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.849236  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.860497  390588 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:20.860524  390588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:43:20.860587  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.893135  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.991835  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:21.017033  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:21.050080  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
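Annotation: the addon manifests were copied to /etc/kubernetes/addons/ on the node (the "scp memory -->" lines) and are now applied with the node's bundled kubectl against the in-cluster kubeconfig. A sketch of the same remote apply; the minikube ssh wrapper here is illustrative only, since the test binary runs the command through its own SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon mirrors the command shape seen in the log above.
    func applyAddon(profile, manifest string) error {
        cmd := exec.Command("minikube", "-p", profile, "ssh", "--",
            "sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", "apply", "-f", manifest)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        _ = applyAddon("functional-407525", "/etc/kubernetes/addons/storage-provisioner.yaml")
    }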
	I1213 10:43:21.773497  390588 node_ready.go:35] waiting up to 6m0s for node "functional-407525" to be "Ready" ...
	I1213 10:43:21.773656  390588 type.go:168] "Request Body" body=""
	I1213 10:43:21.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:21.774009  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774035  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774063  390588 retry.go:31] will retry after 178.71376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774107  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774121  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774127  390588 retry.go:31] will retry after 267.498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:21.953713  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.014320  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.018022  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.018057  390588 retry.go:31] will retry after 328.520116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.042240  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.097866  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.101425  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.101460  390588 retry.go:31] will retry after 340.23882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
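Annotation: each failed apply above is handed back to retry.go, which reschedules it after a growing, jittered delay (178 ms, 267 ms, ~328 ms, and later several seconds) until the API server on port 8441 comes back. A minimal sketch of that retry-with-backoff shape using apimachinery's wait package; the Backoff parameters are assumptions for illustration, not minikube's actual values:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        backoff := wait.Backoff{
            Duration: 200 * time.Millisecond, // initial delay, roughly the first retries above
            Factor:   1.5,                    // grow each attempt
            Jitter:   0.5,                    // randomize, as the uneven delays in the log suggest
            Steps:    10,
        }
        attempt := 0
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempt++
            fmt.Println("apply attempt", attempt)
            // Return true once the kubectl apply succeeds; false retries after the next delay.
            return false, nil
        })
        fmt.Println("gave up:", err)
    }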
	I1213 10:43:22.273721  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.274173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.347588  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.405090  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.408724  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.408759  390588 retry.go:31] will retry after 330.053163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.441890  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.497250  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.500831  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.500864  390588 retry.go:31] will retry after 301.657591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.739051  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.774467  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.774545  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.774882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.796776  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.800408  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.800485  390588 retry.go:31] will retry after 1.110001612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.803607  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.863746  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.863797  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.863816  390588 retry.go:31] will retry after 925.323482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.274339  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.274464  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.274793  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:23.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.774742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.775115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:23.775193  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
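Annotation: the node_ready poller issues a GET on /api/v1/nodes/functional-407525 roughly every 500 ms and, while the apiserver is still down, logs the connection-refused warning and keeps waiting within the 6m0s budget set earlier. A sketch of the same readiness check with client-go, assuming the clientset is built from the kubeconfig shown above:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. "connect: connection refused" while the apiserver restarts
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22127-354468/kubeconfig")
        cs, _ := kubernetes.NewForConfig(cfg)
        for i := 0; i < 5; i++ {
            ready, err := nodeIsReady(cs, "functional-407525")
            fmt.Println("ready:", ready, "err:", err)
            time.Sleep(500 * time.Millisecond)
        }
    }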
	I1213 10:43:23.789322  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:23.850165  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.853613  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.853701  390588 retry.go:31] will retry after 1.468677433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.910870  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:23.967004  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.970690  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.970723  390588 retry.go:31] will retry after 1.30336677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:24.274187  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.274270  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.274613  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:24.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.774104  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.273868  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.273973  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.274299  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:25.274422  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.322752  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:25.335088  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.335126  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.335146  390588 retry.go:31] will retry after 1.31175111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389173  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.389228  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389247  390588 retry.go:31] will retry after 1.937290048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.773818  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.773896  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.774238  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:26.274714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.274790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:26.275175  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:26.647823  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:26.708762  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:26.708815  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.708835  390588 retry.go:31] will retry after 2.338895321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.773966  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.774052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.774373  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.273820  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.327657  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:27.389087  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:27.389124  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.389154  390588 retry.go:31] will retry after 3.77996712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.774347  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.774610  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.274639  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.275025  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:28.774230  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:29.048671  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:29.108913  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:29.108956  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.108976  390588 retry.go:31] will retry after 6.196055786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.274133  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.274210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.274535  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:29.774410  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.774493  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.774856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.274678  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.274752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.275098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.774546  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.774615  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.774881  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:30.774922  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:31.169380  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:31.223779  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:31.227282  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.227315  390588 retry.go:31] will retry after 4.701439473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.274644  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.274723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.275035  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:31.773748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.274119  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.774160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:33.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.273823  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.274181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:33.274234  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:33.773733  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.273904  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.274296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.773742  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.774139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.273828  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.273922  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.305578  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:35.371590  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.371636  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.371657  390588 retry.go:31] will retry after 5.458500829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.773846  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:35.774236  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:35.929536  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:35.989448  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.989487  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.989506  390588 retry.go:31] will retry after 5.007301518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:36.274095  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.274168  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.274462  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:36.774043  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.774126  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.774417  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.273882  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.773915  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:37.774386  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:38.274036  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.274110  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.274365  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:38.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.273872  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.273948  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.774053  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.273899  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:40.274309  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:40.774007  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.774083  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.774431  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.830857  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:40.888820  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:40.888869  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.888889  390588 retry.go:31] will retry after 11.437774943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.997102  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:41.058447  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:41.058511  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.058532  390588 retry.go:31] will retry after 7.34875984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.275648  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.275736  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.275995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:41.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:42.273927  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.274020  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.274372  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:42.274432  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:42.773693  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.774092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.773920  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.774021  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:44.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.274666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.274925  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:44.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:44.773692  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.273902  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.274305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.773737  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.273797  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.273879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.274217  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.774024  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.774120  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.774453  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:46.774515  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:47.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.274050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:47.773764  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.773857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.273933  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.274397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.407754  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:48.470395  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:48.474021  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.474053  390588 retry.go:31] will retry after 19.108505533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.774398  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.774473  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.774751  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:48.774803  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:49.274554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.274627  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.274988  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:49.773726  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.273886  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.273967  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.774213  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.774666  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:51.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.274611  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.274924  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:51.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:51.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.774715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.774977  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.274174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.327551  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:52.388989  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:52.389038  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.389058  390588 retry.go:31] will retry after 15.332526016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.774747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.775066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.273766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.274095  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.773894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:53.774258  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:54.273942  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.274024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:54.774619  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.774730  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.774809  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.775152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:55.775209  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:56.273860  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.273937  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:56.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.274186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.774399  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.774745  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:58.274628  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.274703  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.275023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:58.275075  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:58.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.274411  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.274483  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.274749  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.774628  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.774978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.774714  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.775059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:00.775121  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:01.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.274061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:01.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.773778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.774062  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.273872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.774185  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:03.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.273804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.274108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:03.274159  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:03.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.774368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.273910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.773901  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.773977  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:05.274252  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:05.773910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.774005  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.774314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.274302  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.274372  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.274644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.774485  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.774567  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.774982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.583825  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:07.646535  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.646580  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.646600  390588 retry.go:31] will retry after 14.697551715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.722798  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:07.774314  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.774682  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:07.774739  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:07.791129  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.791173  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.791194  390588 retry.go:31] will retry after 13.531528334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:08.273899  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.274336  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:08.774067  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.774147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.774508  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.274290  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.274369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.274678  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.774447  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.774528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:09.774936  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:10.274570  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.274961  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:10.774562  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.774915  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.273789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.274110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:12.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.273786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:12.274098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:12.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.774136  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.774066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:14.273794  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.274227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:14.274283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:14.773929  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.774010  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.774363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.273724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.273985  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:16.274139  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.274221  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.274567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:16.274622  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:16.774305  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.774378  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.774644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.274446  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.274866  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.774497  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.774899  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:18.274657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.274734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.275051  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:18.275096  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:18.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.774209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.774026  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.774099  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.774355  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.273801  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.273913  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.773981  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.774053  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.774366  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:20.774423  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:21.274357  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.274428  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.274706  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:21.323061  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:21.389635  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:21.389682  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.389701  390588 retry.go:31] will retry after 37.789083594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.773876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.273915  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.273997  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.344570  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:22.405449  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:22.405493  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.405512  390588 retry.go:31] will retry after 23.725920264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.773711  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.773782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.774033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:23.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.274206  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:23.274261  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:23.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.773766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.774054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.274518  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.274774  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.774608  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.774678  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:25.274658  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.274733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.275077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:25.275131  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:25.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.774508  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.774773  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.274739  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.274817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.275144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.274455  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.274547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.274811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.774572  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:27.775003  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:28.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.274777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.275087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:28.773642  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.773716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.273745  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.274155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.773917  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.774248  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:30.274557  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.274641  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.274916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:30.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:30.774540  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.774632  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.774962  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.274077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.774321  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.774707  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:32.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.274604  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.274936  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:32.274993  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:32.774698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.774804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.274529  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.274787  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.774581  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.774664  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.775008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:34.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.274794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.275152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:34.275214  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:34.773858  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.773932  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.273930  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.274307  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.773735  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:36.774233  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:37.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.274140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:37.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.774471  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.774822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.274598  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.274669  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.274999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.774142  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:39.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.274562  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.274851  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:39.274908  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:39.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.774730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.775049  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.273847  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.774227  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.774300  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.774572  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:41.274605  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.274676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.275014  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:41.275084  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:41.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.273842  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.273921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.274231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.773931  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.774027  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.774383  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.273973  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.274062  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.274409  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.773648  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.773733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.773987  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:43.774033  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:44.273702  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.273808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:44.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.773958  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.273983  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.274063  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.274356  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:45.774231  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:46.131654  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:46.194295  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194358  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194451  390588 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:46.274603  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.274700  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.275072  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:46.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.774112  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.774387  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.274208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:48.273867  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.273936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:48.274241  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:48.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.774229  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.273767  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.774519  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.774595  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.774926  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:50.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.274774  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.275102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:50.275164  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:50.774065  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.774140  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.774471  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.274252  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.274326  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.274605  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.774340  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.774416  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.774757  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.274427  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.274511  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.774919  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:52.774958  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:53.274692  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.274773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.275105  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:53.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.273740  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:55.273871  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.273946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.274266  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:55.274336  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:55.773682  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.773752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.773998  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.273698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.273924  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.773928  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:57.774354  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:58.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.273873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.274218  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:58.774470  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.774560  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.774811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.179566  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:59.239921  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.239971  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.240057  390588 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:59.247585  390588 out.go:179] * Enabled addons: 
	I1213 10:44:59.249608  390588 addons.go:530] duration metric: took 1m38.479812026s for enable addons: enabled=[]
	I1213 10:44:59.274157  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.274255  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.274564  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.774339  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.774421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:59.774833  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:00.278749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.278833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.279163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:00.774212  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.774297  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.774688  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.274605  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.274894  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.774686  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.774765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.775087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:01.775143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:02.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.274240  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:02.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.773792  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:04.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.274036  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.274352  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:04.274418  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:04.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.773957  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.774210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.273726  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.274127  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.773770  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:06.774260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:07.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.273836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.274400  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:07.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.774207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.273920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.274303  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.773655  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.773725  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.773989  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:09.273678  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:09.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:09.773807  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.774222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.274017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.274269  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.774349  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.774733  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:11.274712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.274783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.275094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:11.275143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:11.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.774126  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.273826  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.273930  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.773940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.273711  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.274065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.774187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:13.774240  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:14.273793  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.273953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:14.773991  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.774073  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.774396  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.274164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:15.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:16.274172  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.274247  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.280111  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 10:45:16.773739  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.273862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.274194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.773798  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.774048  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:18.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:18.274286  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:18.773986  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.774078  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.774398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.774130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:20.274061  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.274147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.274521  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:20.274567  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:20.774429  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.774513  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.774784  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.274788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.275140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.773809  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.273923  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.274330  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.773836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:22.774266  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:23.273752  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.273825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:23.773854  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.773925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:25.273932  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.274007  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:25.274311  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:25.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.773835  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.273929  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.274023  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.274342  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.774676  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.774744  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.774995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.274109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.773826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.774163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:27.774227  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:28.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.273788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.274057  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:28.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.773816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.774148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.273934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.274250  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.773725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.773794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.774055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:30.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:30.274260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:30.774238  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.774643  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.274624  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.774738  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.775064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.273830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.274149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.773762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:32.774151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:33.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.274135  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:33.773816  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.773892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.274572  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.274643  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.274903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.774729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:34.775152  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:35.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.273759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.274117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:35.774407  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.774479  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.774771  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.274663  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.274756  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.275065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.773912  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.774265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:37.273706  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.273778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.274054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:37.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:37.773740  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.773842  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.273961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.773975  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.774042  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.774302  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:39.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:39.274262  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:39.773743  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.273728  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.274144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.774643  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.774717  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.775033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.273691  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.273765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.774789  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:41.774848  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:42.274590  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.274665  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.275006  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:42.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.774116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.274417  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.274505  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.274764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.774491  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.774561  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:43.774985  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:44.274631  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.274716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:44.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.774086  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.273789  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.273877  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.773938  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.774016  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:46.274211  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.274311  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.274593  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:46.274641  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:46.774347  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.774423  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.774786  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.274695  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.773821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.273791  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.274221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.773944  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:48.774398  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:49.273717  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.274115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:49.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.273881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.774153  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.774227  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.774498  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:50.774547  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:51.274578  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.274980  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:51.773696  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.773772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.774097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.274044  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.774214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:53.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.274028  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.274362  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:53.274420  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:53.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.773918  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.273749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.773750  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:55.774229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:56.273954  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.274030  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.274368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:56.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.774681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.773886  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.773969  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.774297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:57.774351  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:58.274008  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.274074  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.274328  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:58.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.273755  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.273831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.274152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.773661  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.773978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:00.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.273870  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.274207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:00.274265  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:00.774194  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.774271  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.274425  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.274499  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.274770  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.774648  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.774734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.773686  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.774020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:02.774062  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:03.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.273890  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.274214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:03.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.274309  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.274379  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.274657  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.774430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.774509  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:04.774924  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:05.274540  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.274616  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.274963  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:05.773676  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.773758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.774085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.273969  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.274052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.274459  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:07.274619  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.274708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.274974  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:07.275017  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:07.773671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.273847  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.274261  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.773957  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.774035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.774397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.274256  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.773968  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.774044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.774403  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:09.774460  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:10.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:10.774136  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.774210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.274519  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.274594  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.274918  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.774397  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.774832  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:11.774891  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:12.274659  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.274757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:12.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.273921  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.273994  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.773843  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.774234  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:14.273963  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.274066  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.274415  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:14.274474  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:14.773715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.273806  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.274220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.773837  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.773921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:16.274096  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.274165  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.274517  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:16.274565  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:16.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.774356  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.774701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.274489  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.274563  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.274929  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.773641  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.773710  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.773957  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:18.274732  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.274812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.275153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:18.275207  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:18.773906  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.773982  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.774326  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.274430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.274794  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.774601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.774671  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.273724  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.774125  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.774196  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:20.774628  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:21.274424  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.274514  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.274834  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:21.774531  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.774612  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.774944  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.274640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.274709  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.275021  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.774663  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.774773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.775134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:22.775197  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:23.273890  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.273971  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.274309  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:23.773717  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.774083  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:25.274593  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.274667  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.274932  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:25.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:25.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.773769  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.774103  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.274187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.773723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.773803  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.774134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.773942  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.774024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.774376  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:27.774430  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:28.274709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.274789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:28.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.774272  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.273759  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.774348  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.774419  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:29.774820  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:30.274620  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.274696  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.275046  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.775077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.273951  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:32.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.273869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:32.274272  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.773932  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.774017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.774448  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.273707  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.273777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.274033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:34.774219  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:35.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.273839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:35.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.774091  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.273704  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.273807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.773734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:37.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:37.274109  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:37.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.774167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.273869  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.273941  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.774621  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.774711  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.774971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:39.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.273795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.274130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:39.274185  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:39.773882  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.773961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.273738  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.273832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.274158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.774749  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.774834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.775222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:41.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.274347  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:41.274405  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:41.774636  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.774701  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.773953  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.774405  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:43.274638  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.274978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:43.275016  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:43.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.773806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.274363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.774070  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.774138  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.774399  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.273823  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.273898  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.274268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.773995  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.774070  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.774394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:45.774448  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:46.274246  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.274313  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.274596  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:46.774345  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.774417  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.774765  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.274423  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.274522  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.274846  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.774170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.774241  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.774544  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:47.774600  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:48.274170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.274257  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.274614  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:48.774460  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.774547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.774903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.274601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.274681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.274964  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.773817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:50.273855  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.273935  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.274285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:50.274341  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:50.774135  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.774202  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.774454  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.274467  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.274552  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.274884  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.774669  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.774754  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.775052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.273723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.274094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.774189  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:52.774245  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:53.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.274313  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:53.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.274242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.773831  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:54.774330  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:55.273935  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.274280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:55.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.774166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.273793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.274128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.774284  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.774353  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.774609  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:56.774649  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:57.274349  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.274429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.274756  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:57.774568  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.774644  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.274491  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.274570  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.274873  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.774677  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.774750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.775093  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:58.775146  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:59.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.274092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:59.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.273965  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.774530  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.774877  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:01.273680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.274056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:01.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:01.773802  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.774231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.273805  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.773820  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.774149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:03.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.273876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:03.274268  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:03.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.274436  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.274533  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.274808  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.774676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.775027  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.273736  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.273815  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.773934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:05.774242  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:06.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.274139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:06.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.773936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.774268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.274469  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.274550  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.274856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.774641  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.775047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:07.775098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:08.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.273853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:08.773674  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.773747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.773993  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.273756  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:10.274330  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.274409  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.274689  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:10.274730  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:10.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.775070  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.773673  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.773751  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.774001  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:12.774276  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:13.273922  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.273993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.274301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:13.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.774158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.274297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.773969  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.774294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:14.774335  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:15.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:15.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.773859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.774205  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.273875  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.274219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 10:47:16.775086  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:17.273732  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:17.773664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.773749  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.774040  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.773831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.774146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:19.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.273784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:19.274151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:19.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.773873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.774244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.273959  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.274044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.274394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.774369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.774676  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.274781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:21.275128  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:21.773729  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.773910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.273864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.774583  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:23.774974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:24.274727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.274797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.275112  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:24.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.274148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.774201  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:26.273894  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.273970  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:26.274358  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:26.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.774082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.773769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.773862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.273908  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:28.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:29.273783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.274195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:29.773879  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.773954  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.274239  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.775063  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:30.775117  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:31.273664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.273730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.273976  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:31.773680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.774074  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.273770  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:33.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.273816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.274165  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:33.274237  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:33.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.274209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.774154  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.273810  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.773845  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:35.774222  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:36.273675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.274088  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:36.773810  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.774215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.274138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.773861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.774225  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:37.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:38.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.274035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:38.774693  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.774771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.775056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.773832  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.773906  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.774253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:39.774308  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:40.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.274596  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.274862  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:40.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.774759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.775099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.274171  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.773727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.773800  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:42.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.274281  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:42.274339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:42.773878  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.773968  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.774283  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.274019  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.274334  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.774150  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.274183  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.773864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.774198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:44.774253  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:45.273924  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.274419  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:45.773843  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.773923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.774295  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.274029  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:46.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:47.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.274043  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.274393  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:47.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.773795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.773914  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.773990  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.774305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:48.774364  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:49.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.273791  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:49.773785  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.274190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.774233  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.774309  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.774588  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:50.774631  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:51.274650  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.274724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.275059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:51.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.774236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.274538  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.274799  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.774588  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.774666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.775007  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:52.775061  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:53.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:53.773675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.773745  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.774008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.273801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.773943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:55.273989  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.274065  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.274332  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:55.274372  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:55.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.774114  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.774457  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.274294  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.274368  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.274696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.774209  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.774284  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.774573  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:57.274365  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.274443  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.274796  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:57.274856  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:57.774615  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.774691  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.775029  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.274293  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.274363  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.274642  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.774411  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.774519  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.774841  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:59.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.274571  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.274905  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:59.274961  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:59.774120  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.774186  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.774529  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.274587  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.274674  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.275002  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.773691  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.773785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.774128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.273694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.273766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.274084  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.773905  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.774301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:01.774362  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:02.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.273943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:02.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.773929  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.273855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.773848  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.774192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:04.274348  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.274421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.274701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:04.274747  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:04.774520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.774598  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.774955  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.274625  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.274699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.275061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.273741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.773880  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.773956  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:06.774339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:07.273666  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.274015  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:07.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.773867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.273802  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.774472  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.774731  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:08.774771  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:09.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.274602  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.274979  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:09.774731  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.774819  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.775148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.274501  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.274577  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.274825  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.774760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.775071  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:10.775127  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:11.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.273737  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:11.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.774619  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.774916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.274606  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.274685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.275008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.773772  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.773849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:13.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.274085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:13.274132  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:13.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.273776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.773757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.774016  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:15.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.274160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:15.274217  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:15.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.273918  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.773913  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.773993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:17.273914  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:17.274360  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:17.773705  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.773779  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.774047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.274175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:19.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.274589  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:19.274893  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:19.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.774722  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.775081  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.273688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.273761  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.773877  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.773951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.774252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.274225  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.274303  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.274658  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.774461  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.774542  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:21.774990  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:22.273646  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.273719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.273971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:22.773678  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.773773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.273879  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.774466  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.774555  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.774828  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:24.274703  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.274778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.275113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:24.275166  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:24.773777  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.273716  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.773749  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.273812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.274134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.774477  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.774735  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:26.774777  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:27.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.274638  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.274990  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:27.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.274454  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.274531  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.774713  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:28.775072  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:29.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:29.773685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.773767  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.774067  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.274172  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:31.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.273960  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.274245  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:31.274287  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:31.773960  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.774353  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.273874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.274212  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.774110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.774195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:33.774250  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:34.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.274551  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.274859  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:34.774530  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.774653  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.774994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.774121  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:36.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:36.274191  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:36.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.773953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.273782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.274052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.774133  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.274096  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.774434  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.774523  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.774857  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:38.774915  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:39.274697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.274775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:39.773799  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.773875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.274392  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.274461  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.274778  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.774675  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:40.775056  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:41.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.274099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:41.774223  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.774306  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.774579  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.274405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.274535  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.274934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.774574  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:43.273697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.274034  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:43.274076  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:43.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.773825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.273947  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:45.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.274348  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:45.274406  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:45.774078  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.774155  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.774567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.274333  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.274401  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.274668  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.774394  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.774466  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.774810  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:47.274617  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.275033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:47.275083  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:47.774292  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.774364  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.774696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.274590  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.274935  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.774610  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.775020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.273781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.274042  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.773747  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:49.774228  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:50.273926  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.274364  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:50.774202  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.774276  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.274422  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.274498  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.274822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.774623  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.774699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.775050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:51.775104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:52.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.273845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:52.773759  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.273848  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.273927  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.774090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:54.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:54.274238  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.273994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.773662  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.773743  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:56.274020  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.274092  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.274398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:56.274455  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:56.773718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.773898  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.773979  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.774308  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.274114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.774247  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:58.774302  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:59.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:59.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.273835  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.273945  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.274259  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.774386  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.774788  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:00.774843  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:01.274715  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.274784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:01.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.273897  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.274252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.773815  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.773883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:03.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.273923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.274294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:03.274348  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:03.773866  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.773946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.774285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.273977  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.274050  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.274314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.273962  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.773962  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.774279  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:05.774317  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:06.274277  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.274357  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.274684  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:06.774350  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.774429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.774754  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.274072  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.274145  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.274401  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.774168  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:08.273771  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:08.274229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:08.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.773911  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.773987  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.774329  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:10.274643  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.274715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.275018  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:10.275073  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:10.774631  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.774708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.273785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.274118  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.273785  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.773779  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:12.774264  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:13.274414  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.274491  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.274806  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:13.774595  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.274700  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.274776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.275122  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.773666  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.773732  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:15.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.273760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:15.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:15.773812  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.273920  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.774406  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:17.274090  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.274171  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.274528  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:17.274584  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:17.774247  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.774320  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.774585  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.274376  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.274452  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.274800  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.774498  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:19.274279  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.274351  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.274659  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:19.274729  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:19.774509  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.774592  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.774934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.273655  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.273729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.773657  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.773723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.773970  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:49:21.273834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:21.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.773895  390588 type.go:168] "Request Body" body=""
	W1213 10:49:21.773963  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1213 10:49:21.773982  390588 node_ready.go:38] duration metric: took 6m0.000438977s for node "functional-407525" to be "Ready" ...
	I1213 10:49:21.777070  390588 out.go:203] 
	W1213 10:49:21.779923  390588 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:49:21.779945  390588 out.go:285] * 
	W1213 10:49:21.782066  390588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:21.784854  390588 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:49:30 functional-407525 crio[5356]: time="2025-12-13T10:49:30.441092452Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=ad0770d6-46ee-472d-84e2-b52693efc812 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.482515404Z" level=info msg="Checking image status: minikube-local-cache-test:functional-407525" id=4068301b-964f-4f0e-b837-bce95c5d9dbc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.482692702Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.482736034Z" level=info msg="Image minikube-local-cache-test:functional-407525 not found" id=4068301b-964f-4f0e-b837-bce95c5d9dbc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.48281121Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-407525 found" id=4068301b-964f-4f0e-b837-bce95c5d9dbc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.506781096Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-407525" id=1aa416ce-74b2-46a5-985c-573303b662d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.506938702Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-407525 not found" id=1aa416ce-74b2-46a5-985c-573303b662d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.506985176Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-407525 found" id=1aa416ce-74b2-46a5-985c-573303b662d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.533925898Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-407525" id=abe4743f-552f-4861-aa05-f9564df92fcd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.534083545Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-407525 not found" id=abe4743f-552f-4861-aa05-f9564df92fcd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.534138421Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-407525 found" id=abe4743f-552f-4861-aa05-f9564df92fcd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.502935443Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=492be620-e0b9-4142-8062-1456f326837a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.842412631Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a07e4731-b6a1-41b2-b48e-4664da1902b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.842569441Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a07e4731-b6a1-41b2-b48e-4664da1902b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.842610024Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a07e4731-b6a1-41b2-b48e-4664da1902b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.389847421Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9fe27fb5-6769-4d03-b269-b0631ee3e4b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.390003361Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9fe27fb5-6769-4d03-b269-b0631ee3e4b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.390040564Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9fe27fb5-6769-4d03-b269-b0631ee3e4b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.414987895Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3326d30a-f9cb-49f4-b206-bf711f6bc60d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.41511309Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3326d30a-f9cb-49f4-b206-bf711f6bc60d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.41514948Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=3326d30a-f9cb-49f4-b206-bf711f6bc60d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.455900542Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=44bddb36-d858-4379-aceb-38fead06826d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.456029783Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=44bddb36-d858-4379-aceb-38fead06826d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.456072614Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=44bddb36-d858-4379-aceb-38fead06826d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:34 functional-407525 crio[5356]: time="2025-12-13T10:49:34.014137748Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=38865e3b-a4f0-4f21-855b-8b4194495f1f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:35.553007    9357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:35.553597    9357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:35.554653    9357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:35.555230    9357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:35.556920    9357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:49:35 up  2:32,  0 user,  load average: 0.52, 0.32, 0.73
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:49:32 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:33 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1151.
	Dec 13 10:49:33 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:33 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:33 functional-407525 kubelet[9201]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:33 functional-407525 kubelet[9201]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:33 functional-407525 kubelet[9201]: E1213 10:49:33.525518    9201 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:33 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:33 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:34 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 13 10:49:34 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:34 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:34 functional-407525 kubelet[9254]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:34 functional-407525 kubelet[9254]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:34 functional-407525 kubelet[9254]: E1213 10:49:34.339101    9254 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:34 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:34 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:35 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 13 10:49:35 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:35 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:35 functional-407525 kubelet[9274]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:35 functional-407525 kubelet[9274]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:35 functional-407525 kubelet[9274]: E1213 10:49:35.090880    9274 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:35 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:35 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (309.504329ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-407525 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-407525 get pods: exit status 1 (126.314004ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-407525 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (303.684331ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 logs -n 25: (1.040604101s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-371413 image ls --format short --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format yaml --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh     │ functional-371413 ssh pgrep buildkitd                                                                                                             │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ image   │ functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr                                            │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls                                                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format json --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format table --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ delete  │ -p functional-371413                                                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ start   │ -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ start   │ -p functional-407525 --alsologtostderr -v=8                                                                                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:43 UTC │                     │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:latest                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add minikube-local-cache-test:functional-407525                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache delete minikube-local-cache-test:functional-407525                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl images                                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ cache   │ functional-407525 cache reload                                                                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ kubectl │ functional-407525 kubectl -- --context functional-407525 get pods                                                                                 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
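The cache entries exercised in the Audit table can be reproduced by hand; a rough sketch of the same add/reload/delete cycle, assuming the binary and profile from this run:

    $ out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:3.1
    $ out/minikube-linux-arm64 cache list
    $ out/minikube-linux-arm64 -p functional-407525 cache reload    # re-pushes cached images into the node's runtime
    $ out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1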
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:43:16
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:43:16.189245  390588 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:43:16.189385  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189397  390588 out.go:374] Setting ErrFile to fd 2...
	I1213 10:43:16.189403  390588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:43:16.189684  390588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:43:16.190095  390588 out.go:368] Setting JSON to false
	I1213 10:43:16.190986  390588 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8749,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:43:16.191060  390588 start.go:143] virtualization:  
	I1213 10:43:16.194511  390588 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:43:16.198204  390588 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:43:16.198321  390588 notify.go:221] Checking for updates...
	I1213 10:43:16.204163  390588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:43:16.207088  390588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:16.209934  390588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:43:16.212863  390588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:43:16.215711  390588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:43:16.219166  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:16.219330  390588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:43:16.245531  390588 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:43:16.245660  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.304777  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.295770012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.304888  390588 docker.go:319] overlay module found
	I1213 10:43:16.309644  390588 out.go:179] * Using the docker driver based on existing profile
	I1213 10:43:16.312430  390588 start.go:309] selected driver: docker
	I1213 10:43:16.312447  390588 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.312556  390588 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:43:16.312654  390588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:43:16.369591  390588 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 10:43:16.360947105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:43:16.370024  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:16.370077  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:16.370130  390588 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:16.374951  390588 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:43:16.377750  390588 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:43:16.380575  390588 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:43:16.383625  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:16.383675  390588 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:43:16.383684  390588 cache.go:65] Caching tarball of preloaded images
	I1213 10:43:16.383721  390588 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:43:16.383768  390588 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:43:16.383779  390588 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:43:16.383909  390588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:43:16.402414  390588 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:43:16.402437  390588 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:43:16.402458  390588 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:43:16.402490  390588 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:43:16.402563  390588 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-407525"
	I1213 10:43:16.402589  390588 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:43:16.402599  390588 fix.go:54] fixHost starting: 
	I1213 10:43:16.402860  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:16.419664  390588 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:43:16.419692  390588 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:43:16.423019  390588 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:43:16.423065  390588 machine.go:94] provisionDockerMachine start ...
	I1213 10:43:16.423166  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.440791  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.441132  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.441147  390588 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:43:16.590928  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.590952  390588 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:43:16.591012  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.608907  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.609223  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.609243  390588 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:43:16.770512  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:43:16.770629  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:16.791074  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:16.791392  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:16.791418  390588 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:43:16.939938  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:43:16.939965  390588 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:43:16.940042  390588 ubuntu.go:190] setting up certificates
	I1213 10:43:16.940060  390588 provision.go:84] configureAuth start
	I1213 10:43:16.940146  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:16.959231  390588 provision.go:143] copyHostCerts
	I1213 10:43:16.959277  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959321  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:43:16.959334  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:43:16.959423  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:43:16.959550  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959579  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:43:16.959590  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:43:16.959624  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:43:16.959682  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959708  390588 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:43:16.959712  390588 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:43:16.959738  390588 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:43:16.959842  390588 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:43:17.067458  390588 provision.go:177] copyRemoteCerts
	I1213 10:43:17.067620  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:43:17.067673  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.087609  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.191151  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 10:43:17.191266  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:43:17.208031  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 10:43:17.208139  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 10:43:17.224829  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 10:43:17.224888  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:43:17.242075  390588 provision.go:87] duration metric: took 301.967659ms to configureAuth
	I1213 10:43:17.242106  390588 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:43:17.242287  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:17.242396  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.259726  390588 main.go:143] libmachine: Using SSH client type: native
	I1213 10:43:17.260059  390588 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:43:17.260089  390588 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:43:17.589136  390588 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:43:17.589164  390588 machine.go:97] duration metric: took 1.166089785s to provisionDockerMachine
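	The provisioning step above wrote a sysconfig drop-in for CRI-O; 10.96.0.0/12 is the cluster's ServiceCIDR (see the cluster config earlier in this log), presumably so in-cluster registry services can be pulled from without TLS. A quick way to confirm the drop-in landed, assuming the profile container is still running:

	    $ docker exec functional-407525 cat /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '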
	I1213 10:43:17.589176  390588 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:43:17.589189  390588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:43:17.589251  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:43:17.589299  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.609214  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.715839  390588 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:43:17.719089  390588 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1213 10:43:17.719109  390588 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1213 10:43:17.719114  390588 command_runner.go:130] > VERSION_ID="12"
	I1213 10:43:17.719118  390588 command_runner.go:130] > VERSION="12 (bookworm)"
	I1213 10:43:17.719124  390588 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1213 10:43:17.719128  390588 command_runner.go:130] > ID=debian
	I1213 10:43:17.719139  390588 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1213 10:43:17.719147  390588 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1213 10:43:17.719152  390588 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1213 10:43:17.719195  390588 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:43:17.719216  390588 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:43:17.719233  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:43:17.719286  390588 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:43:17.719370  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:43:17.719381  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem
	I1213 10:43:17.719455  390588 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:43:17.719463  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> /etc/test/nested/copy/356328/hosts
	I1213 10:43:17.719505  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:43:17.727090  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:17.744131  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:43:17.760861  390588 start.go:296] duration metric: took 171.654498ms for postStartSetup
	I1213 10:43:17.760950  390588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:43:17.760996  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.777913  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.880295  390588 command_runner.go:130] > 14%
	I1213 10:43:17.880360  390588 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:43:17.884436  390588 command_runner.go:130] > 169G
	I1213 10:43:17.884867  390588 fix.go:56] duration metric: took 1.482264041s for fixHost
	I1213 10:43:17.884887  390588 start.go:83] releasing machines lock for "functional-407525", held for 1.482310261s
	I1213 10:43:17.884953  390588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:43:17.902293  390588 ssh_runner.go:195] Run: cat /version.json
	I1213 10:43:17.902324  390588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:43:17.902343  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.902383  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:17.922251  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:17.922884  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:18.027684  390588 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765275396-22083", "minikube_version": "v1.37.0", "commit": "9f3959633d311997d75aab86f8ff840f224c6486"}
	I1213 10:43:18.027820  390588 ssh_runner.go:195] Run: systemctl --version
	I1213 10:43:18.121469  390588 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1213 10:43:18.124198  390588 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1213 10:43:18.124239  390588 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1213 10:43:18.124329  390588 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:43:18.162710  390588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 10:43:18.167030  390588 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1213 10:43:18.167242  390588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:43:18.167335  390588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:43:18.175207  390588 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:43:18.175230  390588 start.go:496] detecting cgroup driver to use...
	I1213 10:43:18.175264  390588 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:43:18.175320  390588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:43:18.190633  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:43:18.203672  390588 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:43:18.203747  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:43:18.219163  390588 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:43:18.232309  390588 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:43:18.357889  390588 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:43:18.493929  390588 docker.go:234] disabling docker service ...
	I1213 10:43:18.494052  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:43:18.509796  390588 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:43:18.523416  390588 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:43:18.655317  390588 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:43:18.778247  390588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:43:18.791182  390588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:43:18.805083  390588 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
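	/etc/crictl.yaml now points crictl at the CRI-O socket, which is what lets the plain "sudo crictl ..." invocations in the Audit table work without an explicit --runtime-endpoint flag. A minimal check over the same SSH path the test uses:

	    $ out/minikube-linux-arm64 -p functional-407525 ssh "sudo crictl version"
	    # expected to report RuntimeName: cri-o and RuntimeApiVersion: v1, the values logged a few lines below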
	I1213 10:43:18.806588  390588 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:43:18.806679  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.815701  390588 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:43:18.815803  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.824913  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.834321  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.843170  390588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:43:18.851373  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.860701  390588 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:18.869075  390588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
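	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. Reconstructed from those sed expressions (not captured from this run), the touched keys should end up looking roughly like this:

	    $ out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/crio/crio.conf.d/02-crio.conf"
	    # ...other keys omitted; the lines edited above should read approximately:
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]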
	I1213 10:43:18.877860  390588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:43:18.884514  390588 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1213 10:43:18.885462  390588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:43:18.893210  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.009167  390588 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:43:19.185094  390588 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:43:19.185195  390588 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:43:19.189492  390588 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1213 10:43:19.189518  390588 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1213 10:43:19.189526  390588 command_runner.go:130] > Device: 0,72	Inode: 1638        Links: 1
	I1213 10:43:19.189541  390588 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:19.189566  390588 command_runner.go:130] > Access: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189581  390588 command_runner.go:130] > Modify: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189586  390588 command_runner.go:130] > Change: 2025-12-13 10:43:19.120971949 +0000
	I1213 10:43:19.189590  390588 command_runner.go:130] >  Birth: -
	I1213 10:43:19.190244  390588 start.go:564] Will wait 60s for crictl version
	I1213 10:43:19.190335  390588 ssh_runner.go:195] Run: which crictl
	I1213 10:43:19.193561  390588 command_runner.go:130] > /usr/local/bin/crictl
	I1213 10:43:19.194286  390588 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:43:19.222711  390588 command_runner.go:130] > Version:  0.1.0
	I1213 10:43:19.222747  390588 command_runner.go:130] > RuntimeName:  cri-o
	I1213 10:43:19.222752  390588 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1213 10:43:19.222773  390588 command_runner.go:130] > RuntimeApiVersion:  v1
	I1213 10:43:19.225058  390588 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:43:19.225194  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.255970  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.256013  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.256019  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.256025  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.256044  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.256051  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.256078  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.256090  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.256094  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.256098  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.256105  390588 command_runner.go:130] >      static
	I1213 10:43:19.256109  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.256113  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.256117  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.256123  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.256128  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.256131  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.256136  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.256166  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.256195  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.258161  390588 ssh_runner.go:195] Run: crio --version
	I1213 10:43:19.285922  390588 command_runner.go:130] > crio version 1.34.3
	I1213 10:43:19.285950  390588 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1213 10:43:19.285964  390588 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1213 10:43:19.285970  390588 command_runner.go:130] >    GitTreeState:   dirty
	I1213 10:43:19.285975  390588 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1213 10:43:19.285999  390588 command_runner.go:130] >    GoVersion:      go1.24.6
	I1213 10:43:19.286010  390588 command_runner.go:130] >    Compiler:       gc
	I1213 10:43:19.286017  390588 command_runner.go:130] >    Platform:       linux/arm64
	I1213 10:43:19.286022  390588 command_runner.go:130] >    Linkmode:       static
	I1213 10:43:19.286028  390588 command_runner.go:130] >    BuildTags:
	I1213 10:43:19.286046  390588 command_runner.go:130] >      static
	I1213 10:43:19.286056  390588 command_runner.go:130] >      netgo
	I1213 10:43:19.286061  390588 command_runner.go:130] >      osusergo
	I1213 10:43:19.286075  390588 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1213 10:43:19.286093  390588 command_runner.go:130] >      seccomp
	I1213 10:43:19.286102  390588 command_runner.go:130] >      apparmor
	I1213 10:43:19.286108  390588 command_runner.go:130] >      selinux
	I1213 10:43:19.286132  390588 command_runner.go:130] >    LDFlags:          unknown
	I1213 10:43:19.286137  390588 command_runner.go:130] >    SeccompEnabled:   true
	I1213 10:43:19.286153  390588 command_runner.go:130] >    AppArmorEnabled:  false
	I1213 10:43:19.291101  390588 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:43:19.293929  390588 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:43:19.310541  390588 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:43:19.314437  390588 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1213 10:43:19.314776  390588 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:43:19.314904  390588 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:43:19.314962  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.346332  390588 command_runner.go:130] > {
	I1213 10:43:19.346357  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.346361  390588 command_runner.go:130] >     {
	I1213 10:43:19.346369  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.346374  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346380  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.346383  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346387  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346396  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.346404  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.346411  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346416  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.346423  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346429  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346436  390588 command_runner.go:130] >     },
	I1213 10:43:19.346439  390588 command_runner.go:130] >     {
	I1213 10:43:19.346445  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.346449  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346457  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.346467  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346472  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346480  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.346491  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.346494  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346508  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.346518  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346525  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346531  390588 command_runner.go:130] >     },
	I1213 10:43:19.346535  390588 command_runner.go:130] >     {
	I1213 10:43:19.346541  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.346548  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346553  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.346556  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346563  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346571  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.346582  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.346586  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346590  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.346594  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.346600  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346604  390588 command_runner.go:130] >     },
	I1213 10:43:19.346610  390588 command_runner.go:130] >     {
	I1213 10:43:19.346616  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.346621  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346628  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.346632  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346636  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346646  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.346657  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.346661  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346667  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.346671  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346675  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346679  390588 command_runner.go:130] >       },
	I1213 10:43:19.346690  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346698  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346702  390588 command_runner.go:130] >     },
	I1213 10:43:19.346705  390588 command_runner.go:130] >     {
	I1213 10:43:19.346715  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.346722  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346728  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.346731  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346736  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346745  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.346760  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.346764  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346768  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.346775  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346778  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346782  390588 command_runner.go:130] >       },
	I1213 10:43:19.346786  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346796  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346799  390588 command_runner.go:130] >     },
	I1213 10:43:19.346802  390588 command_runner.go:130] >     {
	I1213 10:43:19.346811  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.346818  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346824  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.346828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346832  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346842  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.346851  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.346859  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346863  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.346866  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.346870  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.346875  390588 command_runner.go:130] >       },
	I1213 10:43:19.346879  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346886  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346889  390588 command_runner.go:130] >     },
	I1213 10:43:19.346892  390588 command_runner.go:130] >     {
	I1213 10:43:19.346898  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.346911  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346917  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.346923  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346927  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.346934  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.346946  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.346950  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346954  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.346958  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.346964  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.346967  390588 command_runner.go:130] >     },
	I1213 10:43:19.346970  390588 command_runner.go:130] >     {
	I1213 10:43:19.346977  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.346984  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.346990  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.346993  390588 command_runner.go:130] >       ],
	I1213 10:43:19.346997  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347007  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.347027  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.347034  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347038  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.347041  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347045  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.347048  390588 command_runner.go:130] >       },
	I1213 10:43:19.347053  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347058  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.347062  390588 command_runner.go:130] >     },
	I1213 10:43:19.347065  390588 command_runner.go:130] >     {
	I1213 10:43:19.347072  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.347078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.347083  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.347087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347097  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.347109  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.347120  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.347124  390588 command_runner.go:130] >       ],
	I1213 10:43:19.347132  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.347135  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.347140  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.347145  390588 command_runner.go:130] >       },
	I1213 10:43:19.347149  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.347155  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.347158  390588 command_runner.go:130] >     }
	I1213 10:43:19.347161  390588 command_runner.go:130] >   ]
	I1213 10:43:19.347164  390588 command_runner.go:130] > }
	I1213 10:43:19.347379  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.347391  390588 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:43:19.347452  390588 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:43:19.372755  390588 command_runner.go:130] > {
	I1213 10:43:19.372774  390588 command_runner.go:130] >   "images":  [
	I1213 10:43:19.372779  390588 command_runner.go:130] >     {
	I1213 10:43:19.372788  390588 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1213 10:43:19.372792  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372799  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1213 10:43:19.372803  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372807  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372816  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1213 10:43:19.372824  390588 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1213 10:43:19.372828  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372832  390588 command_runner.go:130] >       "size":  "111333938",
	I1213 10:43:19.372836  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372851  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372854  390588 command_runner.go:130] >     },
	I1213 10:43:19.372857  390588 command_runner.go:130] >     {
	I1213 10:43:19.372863  390588 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1213 10:43:19.372868  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372873  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1213 10:43:19.372876  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372880  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372889  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1213 10:43:19.372897  390588 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1213 10:43:19.372900  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372904  390588 command_runner.go:130] >       "size":  "29037500",
	I1213 10:43:19.372908  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.372920  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372924  390588 command_runner.go:130] >     },
	I1213 10:43:19.372927  390588 command_runner.go:130] >     {
	I1213 10:43:19.372934  390588 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1213 10:43:19.372938  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.372943  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1213 10:43:19.372947  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372950  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.372958  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1213 10:43:19.372966  390588 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1213 10:43:19.372970  390588 command_runner.go:130] >       ],
	I1213 10:43:19.372973  390588 command_runner.go:130] >       "size":  "74491780",
	I1213 10:43:19.372978  390588 command_runner.go:130] >       "username":  "nonroot",
	I1213 10:43:19.372982  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.372985  390588 command_runner.go:130] >     },
	I1213 10:43:19.372988  390588 command_runner.go:130] >     {
	I1213 10:43:19.372994  390588 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1213 10:43:19.372998  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373002  390588 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1213 10:43:19.373007  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373011  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373018  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1213 10:43:19.373025  390588 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1213 10:43:19.373029  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373033  390588 command_runner.go:130] >       "size":  "60857170",
	I1213 10:43:19.373036  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373040  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373043  390588 command_runner.go:130] >       },
	I1213 10:43:19.373052  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373056  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373059  390588 command_runner.go:130] >     },
	I1213 10:43:19.373062  390588 command_runner.go:130] >     {
	I1213 10:43:19.373070  390588 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1213 10:43:19.373078  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373083  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1213 10:43:19.373087  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373090  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373098  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1213 10:43:19.373110  390588 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1213 10:43:19.373114  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373118  390588 command_runner.go:130] >       "size":  "84949999",
	I1213 10:43:19.373122  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373126  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373129  390588 command_runner.go:130] >       },
	I1213 10:43:19.373132  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373136  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373139  390588 command_runner.go:130] >     },
	I1213 10:43:19.373142  390588 command_runner.go:130] >     {
	I1213 10:43:19.373148  390588 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1213 10:43:19.373151  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373157  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1213 10:43:19.373161  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373164  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373172  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1213 10:43:19.373181  390588 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1213 10:43:19.373184  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373188  390588 command_runner.go:130] >       "size":  "72170325",
	I1213 10:43:19.373191  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373195  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373198  390588 command_runner.go:130] >       },
	I1213 10:43:19.373202  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373206  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373208  390588 command_runner.go:130] >     },
	I1213 10:43:19.373211  390588 command_runner.go:130] >     {
	I1213 10:43:19.373218  390588 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1213 10:43:19.373222  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373230  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1213 10:43:19.373234  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373238  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373246  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1213 10:43:19.373253  390588 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1213 10:43:19.373256  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373260  390588 command_runner.go:130] >       "size":  "74106775",
	I1213 10:43:19.373263  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373267  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373270  390588 command_runner.go:130] >     },
	I1213 10:43:19.373273  390588 command_runner.go:130] >     {
	I1213 10:43:19.373279  390588 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1213 10:43:19.373283  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373288  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1213 10:43:19.373291  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373295  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373303  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1213 10:43:19.373321  390588 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1213 10:43:19.373324  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373328  390588 command_runner.go:130] >       "size":  "49822549",
	I1213 10:43:19.373331  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373336  390588 command_runner.go:130] >         "value":  "0"
	I1213 10:43:19.373339  390588 command_runner.go:130] >       },
	I1213 10:43:19.373343  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373346  390588 command_runner.go:130] >       "pinned":  false
	I1213 10:43:19.373349  390588 command_runner.go:130] >     },
	I1213 10:43:19.373352  390588 command_runner.go:130] >     {
	I1213 10:43:19.373359  390588 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1213 10:43:19.373362  390588 command_runner.go:130] >       "repoTags":  [
	I1213 10:43:19.373367  390588 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.373372  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373376  390588 command_runner.go:130] >       "repoDigests":  [
	I1213 10:43:19.373383  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1213 10:43:19.373394  390588 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1213 10:43:19.373398  390588 command_runner.go:130] >       ],
	I1213 10:43:19.373402  390588 command_runner.go:130] >       "size":  "519884",
	I1213 10:43:19.373405  390588 command_runner.go:130] >       "uid":  {
	I1213 10:43:19.373409  390588 command_runner.go:130] >         "value":  "65535"
	I1213 10:43:19.373412  390588 command_runner.go:130] >       },
	I1213 10:43:19.373419  390588 command_runner.go:130] >       "username":  "",
	I1213 10:43:19.373422  390588 command_runner.go:130] >       "pinned":  true
	I1213 10:43:19.373426  390588 command_runner.go:130] >     }
	I1213 10:43:19.373428  390588 command_runner.go:130] >   ]
	I1213 10:43:19.373432  390588 command_runner.go:130] > }
	I1213 10:43:19.375861  390588 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:43:19.375885  390588 cache_images.go:86] Images are preloaded, skipping loading
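[editor's note] The two "sudo crictl images --output json" runs logged above are how minikube confirms that the preloaded image set already contains everything needed for v1.35.0-beta.0 before it decides to skip extraction and loading. As a rough illustration only (the struct, field, and function names below are assumptions made for this sketch, not minikube's actual code), the check amounts to decoding that JSON and looking for the expected repo tags:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the JSON shape printed by `crictl images --output json`
    // in the log above: a top-level "images" array whose entries carry repoTags.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether any listed image carries the wanted tag.
    func hasImage(list imageList, tag string) bool {
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        // Hypothetical local invocation; in the log this command runs over SSH inside the node.
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        fmt.Println(hasImage(list, "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"))
    }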
	I1213 10:43:19.375894  390588 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:43:19.375988  390588 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
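[editor's note] The kubeadm.go:947 block above is the systemd override minikube generates for the kubelet, with --hostname-override and --node-ip filled in from the cluster config dumped on the line just before this note. Purely as a sketch (the function name and layout are illustrative assumptions, not minikube's generator), assembling such an override from those fields looks like this:

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeletUnit assembles a systemd override like the one logged above.
    // kubeletPath, hostname and nodeIP are the only variable parts shown in the log.
    func kubeletUnit(kubeletPath, hostname, nodeIP string) string {
        flags := []string{
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--cgroups-per-qos=false",
            "--config=/var/lib/kubelet/config.yaml",
            "--enforce-node-allocatable=",
            "--hostname-override=" + hostname,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=" +
            kubeletPath + " " + strings.Join(flags, " ") + "\n\n[Install]\n"
    }

    func main() {
        fmt.Print(kubeletUnit(
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
            "functional-407525",
            "192.168.49.2",
        ))
    }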
	I1213 10:43:19.376071  390588 ssh_runner.go:195] Run: crio config
	I1213 10:43:19.425743  390588 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1213 10:43:19.425768  390588 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1213 10:43:19.425775  390588 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1213 10:43:19.425779  390588 command_runner.go:130] > #
	I1213 10:43:19.425787  390588 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1213 10:43:19.425793  390588 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1213 10:43:19.425801  390588 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1213 10:43:19.425810  390588 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1213 10:43:19.425814  390588 command_runner.go:130] > # reload'.
	I1213 10:43:19.425821  390588 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1213 10:43:19.425828  390588 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1213 10:43:19.425838  390588 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1213 10:43:19.425844  390588 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1213 10:43:19.425847  390588 command_runner.go:130] > [crio]
	I1213 10:43:19.425854  390588 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1213 10:43:19.425862  390588 command_runner.go:130] > # containers images, in this directory.
	I1213 10:43:19.426591  390588 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1213 10:43:19.426608  390588 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1213 10:43:19.427294  390588 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1213 10:43:19.427313  390588 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1213 10:43:19.427819  390588 command_runner.go:130] > # imagestore = ""
	I1213 10:43:19.427842  390588 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1213 10:43:19.427850  390588 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1213 10:43:19.428482  390588 command_runner.go:130] > # storage_driver = "overlay"
	I1213 10:43:19.428503  390588 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1213 10:43:19.428511  390588 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1213 10:43:19.428824  390588 command_runner.go:130] > # storage_option = [
	I1213 10:43:19.429159  390588 command_runner.go:130] > # ]
	I1213 10:43:19.429181  390588 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1213 10:43:19.429189  390588 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1213 10:43:19.429811  390588 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1213 10:43:19.429832  390588 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1213 10:43:19.429847  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1213 10:43:19.429857  390588 command_runner.go:130] > # always happen on a node reboot
	I1213 10:43:19.430483  390588 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1213 10:43:19.430528  390588 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1213 10:43:19.430541  390588 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1213 10:43:19.430547  390588 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1213 10:43:19.431051  390588 command_runner.go:130] > # version_file_persist = ""
	I1213 10:43:19.431076  390588 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1213 10:43:19.431086  390588 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1213 10:43:19.431716  390588 command_runner.go:130] > # internal_wipe = true
	I1213 10:43:19.431739  390588 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1213 10:43:19.431747  390588 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1213 10:43:19.432440  390588 command_runner.go:130] > # internal_repair = true
	I1213 10:43:19.432456  390588 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1213 10:43:19.432463  390588 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1213 10:43:19.432469  390588 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1213 10:43:19.432478  390588 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1213 10:43:19.432487  390588 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1213 10:43:19.432491  390588 command_runner.go:130] > [crio.api]
	I1213 10:43:19.432496  390588 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1213 10:43:19.432503  390588 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1213 10:43:19.432512  390588 command_runner.go:130] > # IP address on which the stream server will listen.
	I1213 10:43:19.432517  390588 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1213 10:43:19.432544  390588 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1213 10:43:19.432552  390588 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1213 10:43:19.432851  390588 command_runner.go:130] > # stream_port = "0"
	I1213 10:43:19.432867  390588 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1213 10:43:19.432873  390588 command_runner.go:130] > # stream_enable_tls = false
	I1213 10:43:19.432879  390588 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1213 10:43:19.432886  390588 command_runner.go:130] > # stream_idle_timeout = ""
	I1213 10:43:19.432897  390588 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1213 10:43:19.432906  390588 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433090  390588 command_runner.go:130] > # stream_tls_cert = ""
	I1213 10:43:19.433111  390588 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1213 10:43:19.433117  390588 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1213 10:43:19.433335  390588 command_runner.go:130] > # stream_tls_key = ""
	I1213 10:43:19.433354  390588 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1213 10:43:19.433362  390588 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1213 10:43:19.433373  390588 command_runner.go:130] > # automatically pick up the changes.
	I1213 10:43:19.433389  390588 command_runner.go:130] > # stream_tls_ca = ""
	I1213 10:43:19.433408  390588 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433419  390588 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1213 10:43:19.433428  390588 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1213 10:43:19.433678  390588 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1213 10:43:19.433694  390588 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1213 10:43:19.433701  390588 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1213 10:43:19.433705  390588 command_runner.go:130] > [crio.runtime]
	I1213 10:43:19.433711  390588 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1213 10:43:19.433719  390588 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1213 10:43:19.433726  390588 command_runner.go:130] > # "nofile=1024:2048"
	I1213 10:43:19.433733  390588 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1213 10:43:19.433737  390588 command_runner.go:130] > # default_ulimits = [
	I1213 10:43:19.433744  390588 command_runner.go:130] > # ]
	I1213 10:43:19.433751  390588 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1213 10:43:19.433758  390588 command_runner.go:130] > # no_pivot = false
	I1213 10:43:19.433764  390588 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1213 10:43:19.433771  390588 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1213 10:43:19.433778  390588 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1213 10:43:19.433785  390588 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1213 10:43:19.433790  390588 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1213 10:43:19.433797  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.433949  390588 command_runner.go:130] > # conmon = ""
	I1213 10:43:19.433968  390588 command_runner.go:130] > # Cgroup setting for conmon
	I1213 10:43:19.433978  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1213 10:43:19.434402  390588 command_runner.go:130] > conmon_cgroup = "pod"
	I1213 10:43:19.434425  390588 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1213 10:43:19.434435  390588 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1213 10:43:19.434446  390588 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1213 10:43:19.434453  390588 command_runner.go:130] > # conmon_env = [
	I1213 10:43:19.434466  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434472  390588 command_runner.go:130] > # Additional environment variables to set for all the
	I1213 10:43:19.434478  390588 command_runner.go:130] > # containers. These are overridden if set in the
	I1213 10:43:19.434484  390588 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1213 10:43:19.434488  390588 command_runner.go:130] > # default_env = [
	I1213 10:43:19.434491  390588 command_runner.go:130] > # ]
	I1213 10:43:19.434497  390588 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1213 10:43:19.434515  390588 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1213 10:43:19.434525  390588 command_runner.go:130] > # selinux = false
	I1213 10:43:19.434535  390588 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1213 10:43:19.434543  390588 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1213 10:43:19.434555  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434559  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.434565  390588 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1213 10:43:19.434570  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434841  390588 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1213 10:43:19.434858  390588 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1213 10:43:19.434865  390588 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1213 10:43:19.434872  390588 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1213 10:43:19.434885  390588 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1213 10:43:19.434891  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.434896  390588 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1213 10:43:19.434902  390588 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1213 10:43:19.434908  390588 command_runner.go:130] > # the cgroup blockio controller.
	I1213 10:43:19.434913  390588 command_runner.go:130] > # blockio_config_file = ""
	I1213 10:43:19.434937  390588 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1213 10:43:19.434946  390588 command_runner.go:130] > # blockio parameters.
	I1213 10:43:19.434950  390588 command_runner.go:130] > # blockio_reload = false
	I1213 10:43:19.434957  390588 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1213 10:43:19.434961  390588 command_runner.go:130] > # irqbalance daemon.
	I1213 10:43:19.434966  390588 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1213 10:43:19.434972  390588 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1213 10:43:19.434982  390588 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1213 10:43:19.434992  390588 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1213 10:43:19.435365  390588 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1213 10:43:19.435381  390588 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1213 10:43:19.435387  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.435392  390588 command_runner.go:130] > # rdt_config_file = ""
	I1213 10:43:19.435398  390588 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1213 10:43:19.435404  390588 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1213 10:43:19.435411  390588 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1213 10:43:19.435584  390588 command_runner.go:130] > # separate_pull_cgroup = ""
	I1213 10:43:19.435601  390588 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1213 10:43:19.435608  390588 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1213 10:43:19.435617  390588 command_runner.go:130] > # will be added.
	I1213 10:43:19.436649  390588 command_runner.go:130] > # default_capabilities = [
	I1213 10:43:19.436661  390588 command_runner.go:130] > # 	"CHOWN",
	I1213 10:43:19.436665  390588 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1213 10:43:19.436669  390588 command_runner.go:130] > # 	"FSETID",
	I1213 10:43:19.436673  390588 command_runner.go:130] > # 	"FOWNER",
	I1213 10:43:19.436679  390588 command_runner.go:130] > # 	"SETGID",
	I1213 10:43:19.436683  390588 command_runner.go:130] > # 	"SETUID",
	I1213 10:43:19.436708  390588 command_runner.go:130] > # 	"SETPCAP",
	I1213 10:43:19.436718  390588 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1213 10:43:19.436722  390588 command_runner.go:130] > # 	"KILL",
	I1213 10:43:19.436725  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436737  390588 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1213 10:43:19.436744  390588 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1213 10:43:19.436749  390588 command_runner.go:130] > # add_inheritable_capabilities = false
	I1213 10:43:19.436759  390588 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1213 10:43:19.436773  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436777  390588 command_runner.go:130] > default_sysctls = [
	I1213 10:43:19.436788  390588 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1213 10:43:19.436794  390588 command_runner.go:130] > ]
	I1213 10:43:19.436799  390588 command_runner.go:130] > # List of devices on the host that a
	I1213 10:43:19.436806  390588 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1213 10:43:19.436813  390588 command_runner.go:130] > # allowed_devices = [
	I1213 10:43:19.436817  390588 command_runner.go:130] > # 	"/dev/fuse",
	I1213 10:43:19.436820  390588 command_runner.go:130] > # 	"/dev/net/tun",
	I1213 10:43:19.436823  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436828  390588 command_runner.go:130] > # List of additional devices. specified as
	I1213 10:43:19.436836  390588 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1213 10:43:19.436842  390588 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1213 10:43:19.436850  390588 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1213 10:43:19.436857  390588 command_runner.go:130] > # additional_devices = [
	I1213 10:43:19.436861  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436868  390588 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1213 10:43:19.436872  390588 command_runner.go:130] > # cdi_spec_dirs = [
	I1213 10:43:19.436878  390588 command_runner.go:130] > # 	"/etc/cdi",
	I1213 10:43:19.436882  390588 command_runner.go:130] > # 	"/var/run/cdi",
	I1213 10:43:19.436888  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436895  390588 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1213 10:43:19.436904  390588 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1213 10:43:19.436908  390588 command_runner.go:130] > # Defaults to false.
	I1213 10:43:19.436913  390588 command_runner.go:130] > # device_ownership_from_security_context = false
	I1213 10:43:19.436919  390588 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1213 10:43:19.436926  390588 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1213 10:43:19.436930  390588 command_runner.go:130] > # hooks_dir = [
	I1213 10:43:19.436936  390588 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1213 10:43:19.436942  390588 command_runner.go:130] > # ]
	I1213 10:43:19.436948  390588 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1213 10:43:19.436964  390588 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1213 10:43:19.436969  390588 command_runner.go:130] > # its default mounts from the following two files:
	I1213 10:43:19.436973  390588 command_runner.go:130] > #
	I1213 10:43:19.436981  390588 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1213 10:43:19.436992  390588 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1213 10:43:19.437001  390588 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1213 10:43:19.437008  390588 command_runner.go:130] > #
	I1213 10:43:19.437022  390588 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1213 10:43:19.437029  390588 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1213 10:43:19.437035  390588 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1213 10:43:19.437044  390588 command_runner.go:130] > #      only add mounts it finds in this file.
	I1213 10:43:19.437047  390588 command_runner.go:130] > #
	I1213 10:43:19.437051  390588 command_runner.go:130] > # default_mounts_file = ""
	I1213 10:43:19.437059  390588 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1213 10:43:19.437068  390588 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1213 10:43:19.437072  390588 command_runner.go:130] > # pids_limit = -1
	I1213 10:43:19.437078  390588 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1213 10:43:19.437087  390588 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1213 10:43:19.437094  390588 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1213 10:43:19.437104  390588 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1213 10:43:19.437110  390588 command_runner.go:130] > # log_size_max = -1
	I1213 10:43:19.437117  390588 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1213 10:43:19.437124  390588 command_runner.go:130] > # log_to_journald = false
	I1213 10:43:19.437130  390588 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1213 10:43:19.437136  390588 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1213 10:43:19.437143  390588 command_runner.go:130] > # Path to directory for container attach sockets.
	I1213 10:43:19.437149  390588 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1213 10:43:19.437160  390588 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1213 10:43:19.437164  390588 command_runner.go:130] > # bind_mount_prefix = ""
	I1213 10:43:19.437170  390588 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1213 10:43:19.437174  390588 command_runner.go:130] > # read_only = false
	I1213 10:43:19.437180  390588 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1213 10:43:19.437188  390588 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1213 10:43:19.437195  390588 command_runner.go:130] > # live configuration reload.
	I1213 10:43:19.437199  390588 command_runner.go:130] > # log_level = "info"
	I1213 10:43:19.437216  390588 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1213 10:43:19.437221  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.437232  390588 command_runner.go:130] > # log_filter = ""
	I1213 10:43:19.437241  390588 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437248  390588 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1213 10:43:19.437252  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437260  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437264  390588 command_runner.go:130] > # uid_mappings = ""
	I1213 10:43:19.437270  390588 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1213 10:43:19.437280  390588 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1213 10:43:19.437285  390588 command_runner.go:130] > # separated by comma.
	I1213 10:43:19.437295  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437301  390588 command_runner.go:130] > # gid_mappings = ""
	I1213 10:43:19.437308  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1213 10:43:19.437314  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437320  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437331  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437335  390588 command_runner.go:130] > # minimum_mappable_uid = -1
	I1213 10:43:19.437345  390588 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1213 10:43:19.437354  390588 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1213 10:43:19.437361  390588 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1213 10:43:19.437371  390588 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1213 10:43:19.437375  390588 command_runner.go:130] > # minimum_mappable_gid = -1
	I1213 10:43:19.437382  390588 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1213 10:43:19.437390  390588 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1213 10:43:19.437396  390588 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1213 10:43:19.437403  390588 command_runner.go:130] > # ctr_stop_timeout = 30
	I1213 10:43:19.437409  390588 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1213 10:43:19.437416  390588 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1213 10:43:19.437423  390588 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1213 10:43:19.437428  390588 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1213 10:43:19.437432  390588 command_runner.go:130] > # drop_infra_ctr = true
	I1213 10:43:19.437441  390588 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1213 10:43:19.437449  390588 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1213 10:43:19.437457  390588 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1213 10:43:19.437473  390588 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1213 10:43:19.437482  390588 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1213 10:43:19.437491  390588 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1213 10:43:19.437497  390588 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1213 10:43:19.437502  390588 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1213 10:43:19.437506  390588 command_runner.go:130] > # shared_cpuset = ""
	I1213 10:43:19.437511  390588 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1213 10:43:19.437519  390588 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1213 10:43:19.437524  390588 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1213 10:43:19.437534  390588 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1213 10:43:19.437546  390588 command_runner.go:130] > # pinns_path = ""
	I1213 10:43:19.437553  390588 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1213 10:43:19.437560  390588 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1213 10:43:19.437567  390588 command_runner.go:130] > # enable_criu_support = true
	I1213 10:43:19.437573  390588 command_runner.go:130] > # Enable/disable the generation of the container,
	I1213 10:43:19.437579  390588 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1213 10:43:19.437586  390588 command_runner.go:130] > # enable_pod_events = false
	I1213 10:43:19.437593  390588 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1213 10:43:19.437598  390588 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1213 10:43:19.437604  390588 command_runner.go:130] > # default_runtime = "crun"
	I1213 10:43:19.437609  390588 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1213 10:43:19.437619  390588 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1213 10:43:19.437636  390588 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1213 10:43:19.437642  390588 command_runner.go:130] > # creation as a file is not desired either.
	I1213 10:43:19.437653  390588 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1213 10:43:19.437664  390588 command_runner.go:130] > # the hostname is being managed dynamically.
	I1213 10:43:19.437668  390588 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1213 10:43:19.437672  390588 command_runner.go:130] > # ]
	I1213 10:43:19.437678  390588 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1213 10:43:19.437685  390588 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1213 10:43:19.437693  390588 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1213 10:43:19.437708  390588 command_runner.go:130] > # Each entry in the table should follow the format:
	I1213 10:43:19.437715  390588 command_runner.go:130] > #
	I1213 10:43:19.437724  390588 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1213 10:43:19.437729  390588 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1213 10:43:19.437737  390588 command_runner.go:130] > # runtime_type = "oci"
	I1213 10:43:19.437742  390588 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1213 10:43:19.437752  390588 command_runner.go:130] > # inherit_default_runtime = false
	I1213 10:43:19.437760  390588 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1213 10:43:19.437764  390588 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1213 10:43:19.437769  390588 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1213 10:43:19.437775  390588 command_runner.go:130] > # monitor_env = []
	I1213 10:43:19.437780  390588 command_runner.go:130] > # privileged_without_host_devices = false
	I1213 10:43:19.437787  390588 command_runner.go:130] > # allowed_annotations = []
	I1213 10:43:19.437793  390588 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1213 10:43:19.437799  390588 command_runner.go:130] > # no_sync_log = false
	I1213 10:43:19.437803  390588 command_runner.go:130] > # default_annotations = {}
	I1213 10:43:19.437807  390588 command_runner.go:130] > # stream_websockets = false
	I1213 10:43:19.437810  390588 command_runner.go:130] > # seccomp_profile = ""
	I1213 10:43:19.437838  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.437847  390588 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1213 10:43:19.437854  390588 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1213 10:43:19.437860  390588 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1213 10:43:19.437868  390588 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1213 10:43:19.437874  390588 command_runner.go:130] > #   in $PATH.
	I1213 10:43:19.437880  390588 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1213 10:43:19.437888  390588 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1213 10:43:19.437895  390588 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1213 10:43:19.437898  390588 command_runner.go:130] > #   state.
	I1213 10:43:19.437905  390588 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1213 10:43:19.437913  390588 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1213 10:43:19.437920  390588 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1213 10:43:19.437926  390588 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1213 10:43:19.437932  390588 command_runner.go:130] > #   the values from the default runtime on load time.
	I1213 10:43:19.437938  390588 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1213 10:43:19.437949  390588 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1213 10:43:19.437959  390588 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1213 10:43:19.437971  390588 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1213 10:43:19.437976  390588 command_runner.go:130] > #   The currently recognized values are:
	I1213 10:43:19.437983  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1213 10:43:19.437993  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1213 10:43:19.438000  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1213 10:43:19.438006  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1213 10:43:19.438017  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1213 10:43:19.438026  390588 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1213 10:43:19.438042  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1213 10:43:19.438048  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1213 10:43:19.438055  390588 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1213 10:43:19.438064  390588 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1213 10:43:19.438071  390588 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1213 10:43:19.438079  390588 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1213 10:43:19.438091  390588 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1213 10:43:19.438097  390588 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1213 10:43:19.438104  390588 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1213 10:43:19.438114  390588 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1213 10:43:19.438123  390588 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1213 10:43:19.438128  390588 command_runner.go:130] > #   deprecated option "conmon".
	I1213 10:43:19.438135  390588 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1213 10:43:19.438145  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1213 10:43:19.438153  390588 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1213 10:43:19.438160  390588 command_runner.go:130] > #   should be moved to the container's cgroup
	I1213 10:43:19.438168  390588 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1213 10:43:19.438173  390588 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1213 10:43:19.438182  390588 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1213 10:43:19.438186  390588 command_runner.go:130] > #   conmon-rs by using:
	I1213 10:43:19.438194  390588 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1213 10:43:19.438204  390588 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1213 10:43:19.438215  390588 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1213 10:43:19.438228  390588 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1213 10:43:19.438236  390588 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1213 10:43:19.438246  390588 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1213 10:43:19.438254  390588 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1213 10:43:19.438263  390588 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1213 10:43:19.438271  390588 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1213 10:43:19.438280  390588 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1213 10:43:19.438293  390588 command_runner.go:130] > #   when a machine crash happens.
	I1213 10:43:19.438300  390588 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1213 10:43:19.438308  390588 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1213 10:43:19.438322  390588 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1213 10:43:19.438327  390588 command_runner.go:130] > #   seccomp profile for the runtime.
	I1213 10:43:19.438335  390588 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1213 10:43:19.438343  390588 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1213 10:43:19.438346  390588 command_runner.go:130] > #
	I1213 10:43:19.438350  390588 command_runner.go:130] > # Using the seccomp notifier feature:
	I1213 10:43:19.438353  390588 command_runner.go:130] > #
	I1213 10:43:19.438359  390588 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1213 10:43:19.438370  390588 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1213 10:43:19.438376  390588 command_runner.go:130] > #
	I1213 10:43:19.438383  390588 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1213 10:43:19.438392  390588 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1213 10:43:19.438395  390588 command_runner.go:130] > #
	I1213 10:43:19.438401  390588 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1213 10:43:19.438406  390588 command_runner.go:130] > # feature.
	I1213 10:43:19.438410  390588 command_runner.go:130] > #
	I1213 10:43:19.438416  390588 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1213 10:43:19.438422  390588 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1213 10:43:19.438431  390588 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1213 10:43:19.438437  390588 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1213 10:43:19.438447  390588 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1213 10:43:19.438450  390588 command_runner.go:130] > #
	I1213 10:43:19.438456  390588 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1213 10:43:19.438465  390588 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1213 10:43:19.438471  390588 command_runner.go:130] > #
	I1213 10:43:19.438478  390588 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1213 10:43:19.438486  390588 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1213 10:43:19.438491  390588 command_runner.go:130] > #
	I1213 10:43:19.438497  390588 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1213 10:43:19.438512  390588 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1213 10:43:19.438516  390588 command_runner.go:130] > # limitation.
	I1213 10:43:19.438523  390588 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1213 10:43:19.438528  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1213 10:43:19.438533  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438539  390588 command_runner.go:130] > runtime_root = "/run/crun"
	I1213 10:43:19.438543  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438549  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438553  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438560  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438564  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438577  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438581  390588 command_runner.go:130] > allowed_annotations = [
	I1213 10:43:19.438586  390588 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1213 10:43:19.438589  390588 command_runner.go:130] > ]
	I1213 10:43:19.438594  390588 command_runner.go:130] > privileged_without_host_devices = false
	I1213 10:43:19.438599  390588 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1213 10:43:19.438604  390588 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1213 10:43:19.438610  390588 command_runner.go:130] > runtime_type = ""
	I1213 10:43:19.438614  390588 command_runner.go:130] > runtime_root = "/run/runc"
	I1213 10:43:19.438617  390588 command_runner.go:130] > inherit_default_runtime = false
	I1213 10:43:19.438621  390588 command_runner.go:130] > runtime_config_path = ""
	I1213 10:43:19.438625  390588 command_runner.go:130] > container_min_memory = ""
	I1213 10:43:19.438633  390588 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1213 10:43:19.438639  390588 command_runner.go:130] > monitor_cgroup = "pod"
	I1213 10:43:19.438644  390588 command_runner.go:130] > monitor_exec_cgroup = ""
	I1213 10:43:19.438649  390588 command_runner.go:130] > privileged_without_host_devices = false
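For reference, the crun handler above allows only the "io.containers.trace-syscall" annotation and the runc handler allows none. A minimal drop-in sketch of a handler that would additionally permit the seccomp notifier annotation described earlier is shown below; the handler name "runc-debug" and the drop-in file name are hypothetical and are not part of this run:

	# /etc/crio/crio.conf.d/99-seccomp-notifier.conf (hypothetical drop-in)
	[crio.runtime.runtimes.runc-debug]
	runtime_path = "/usr/libexec/crio/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# Pods then opt in by setting the annotation
	# "io.kubernetes.cri-o.seccompNotifierAction" to "stop" on the Pod sandbox
	# and using restartPolicy: Never, as the comments above describe.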
	I1213 10:43:19.438664  390588 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1213 10:43:19.438673  390588 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1213 10:43:19.438684  390588 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1213 10:43:19.438692  390588 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1213 10:43:19.438702  390588 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1213 10:43:19.438712  390588 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1213 10:43:19.438728  390588 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1213 10:43:19.438734  390588 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1213 10:43:19.438743  390588 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1213 10:43:19.438755  390588 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1213 10:43:19.438761  390588 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1213 10:43:19.438772  390588 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1213 10:43:19.438778  390588 command_runner.go:130] > # Example:
	I1213 10:43:19.438782  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1213 10:43:19.438787  390588 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1213 10:43:19.438793  390588 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1213 10:43:19.438801  390588 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1213 10:43:19.438806  390588 command_runner.go:130] > # cpuset = "0-1"
	I1213 10:43:19.438810  390588 command_runner.go:130] > # cpushares = "5"
	I1213 10:43:19.438814  390588 command_runner.go:130] > # cpuquota = "1000"
	I1213 10:43:19.438820  390588 command_runner.go:130] > # cpuperiod = "100000"
	I1213 10:43:19.438825  390588 command_runner.go:130] > # cpulimit = "35"
	I1213 10:43:19.438837  390588 command_runner.go:130] > # Where:
	I1213 10:43:19.438841  390588 command_runner.go:130] > # The workload name is workload-type.
	I1213 10:43:19.438852  390588 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1213 10:43:19.438861  390588 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1213 10:43:19.438866  390588 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1213 10:43:19.438875  390588 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1213 10:43:19.438880  390588 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1213 10:43:19.438885  390588 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1213 10:43:19.438894  390588 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1213 10:43:19.438905  390588 command_runner.go:130] > # Default value is set to true
	I1213 10:43:19.438910  390588 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1213 10:43:19.438915  390588 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1213 10:43:19.438925  390588 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1213 10:43:19.438932  390588 command_runner.go:130] > # Default value is set to 'false'
	I1213 10:43:19.438938  390588 command_runner.go:130] > # disable_hostport_mapping = false
	I1213 10:43:19.438943  390588 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1213 10:43:19.438951  390588 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1213 10:43:19.438954  390588 command_runner.go:130] > # timezone = ""
	I1213 10:43:19.438961  390588 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1213 10:43:19.438967  390588 command_runner.go:130] > #
	I1213 10:43:19.438973  390588 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1213 10:43:19.438979  390588 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1213 10:43:19.438983  390588 command_runner.go:130] > [crio.image]
	I1213 10:43:19.438993  390588 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1213 10:43:19.438999  390588 command_runner.go:130] > # default_transport = "docker://"
	I1213 10:43:19.439005  390588 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1213 10:43:19.439015  390588 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439019  390588 command_runner.go:130] > # global_auth_file = ""
	I1213 10:43:19.439024  390588 command_runner.go:130] > # The image used to instantiate infra containers.
	I1213 10:43:19.439029  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439034  390588 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1213 10:43:19.439040  390588 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1213 10:43:19.439048  390588 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1213 10:43:19.439055  390588 command_runner.go:130] > # This option supports live configuration reload.
	I1213 10:43:19.439060  390588 command_runner.go:130] > # pause_image_auth_file = ""
	I1213 10:43:19.439066  390588 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1213 10:43:19.439072  390588 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1213 10:43:19.439081  390588 command_runner.go:130] > #   specified in the pause image. When commented out, it will fall back to the
	I1213 10:43:19.439087  390588 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1213 10:43:19.439094  390588 command_runner.go:130] > # pause_command = "/pause"
	I1213 10:43:19.439100  390588 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1213 10:43:19.439106  390588 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1213 10:43:19.439111  390588 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1213 10:43:19.439117  390588 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1213 10:43:19.439123  390588 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1213 10:43:19.439134  390588 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1213 10:43:19.439142  390588 command_runner.go:130] > # pinned_images = [
	I1213 10:43:19.439145  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439151  390588 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1213 10:43:19.439157  390588 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1213 10:43:19.439166  390588 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1213 10:43:19.439172  390588 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1213 10:43:19.439180  390588 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1213 10:43:19.439184  390588 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1213 10:43:19.439190  390588 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1213 10:43:19.439197  390588 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1213 10:43:19.439203  390588 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1213 10:43:19.439209  390588 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1213 10:43:19.439223  390588 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1213 10:43:19.439228  390588 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1213 10:43:19.439234  390588 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1213 10:43:19.439243  390588 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1213 10:43:19.439247  390588 command_runner.go:130] > # changing them here.
	I1213 10:43:19.439253  390588 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1213 10:43:19.439260  390588 command_runner.go:130] > # insecure_registries = [
	I1213 10:43:19.439263  390588 command_runner.go:130] > # ]
	I1213 10:43:19.439268  390588 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1213 10:43:19.439273  390588 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1213 10:43:19.439723  390588 command_runner.go:130] > # image_volumes = "mkdir"
	I1213 10:43:19.439741  390588 command_runner.go:130] > # Temporary directory to use for storing big files
	I1213 10:43:19.439879  390588 command_runner.go:130] > # big_files_temporary_dir = ""
	I1213 10:43:19.439918  390588 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1213 10:43:19.439927  390588 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1213 10:43:19.439931  390588 command_runner.go:130] > # auto_reload_registries = false
	I1213 10:43:19.439937  390588 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1213 10:43:19.439946  390588 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1213 10:43:19.439958  390588 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1213 10:43:19.439963  390588 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1213 10:43:19.439974  390588 command_runner.go:130] > # The mode of short name resolution.
	I1213 10:43:19.439985  390588 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1213 10:43:19.439993  390588 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1213 10:43:19.440002  390588 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1213 10:43:19.440006  390588 command_runner.go:130] > # short_name_mode = "enforcing"
	I1213 10:43:19.440012  390588 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1213 10:43:19.440018  390588 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1213 10:43:19.440023  390588 command_runner.go:130] > # oci_artifact_mount_support = true
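Tying the crio.image options above together, here is a hedged sketch of a drop-in that pins the pause image against kubelet garbage collection and keeps the signature policy this run already uses. The file name is hypothetical; the image reference and policy path are taken from the values printed above:

	# /etc/crio/crio.conf.d/99-image.conf (hypothetical drop-in)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	signature_policy = "/etc/crio/policy.json"
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
	]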
	I1213 10:43:19.440029  390588 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1213 10:43:19.440034  390588 command_runner.go:130] > # CNI plugins.
	I1213 10:43:19.440037  390588 command_runner.go:130] > [crio.network]
	I1213 10:43:19.440044  390588 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1213 10:43:19.440053  390588 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1213 10:43:19.440058  390588 command_runner.go:130] > # cni_default_network = ""
	I1213 10:43:19.440064  390588 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1213 10:43:19.440073  390588 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1213 10:43:19.440080  390588 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1213 10:43:19.440084  390588 command_runner.go:130] > # plugin_dirs = [
	I1213 10:43:19.440211  390588 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1213 10:43:19.440357  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440384  390588 command_runner.go:130] > # List of included pod metrics.
	I1213 10:43:19.440392  390588 command_runner.go:130] > # included_pod_metrics = [
	I1213 10:43:19.440401  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440408  390588 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1213 10:43:19.440418  390588 command_runner.go:130] > [crio.metrics]
	I1213 10:43:19.440423  390588 command_runner.go:130] > # Globally enable or disable metrics support.
	I1213 10:43:19.440436  390588 command_runner.go:130] > # enable_metrics = false
	I1213 10:43:19.440441  390588 command_runner.go:130] > # Specify enabled metrics collectors.
	I1213 10:43:19.440446  390588 command_runner.go:130] > # Per default all metrics are enabled.
	I1213 10:43:19.440452  390588 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1213 10:43:19.440460  390588 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1213 10:43:19.440472  390588 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1213 10:43:19.440477  390588 command_runner.go:130] > # metrics_collectors = [
	I1213 10:43:19.440481  390588 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1213 10:43:19.440496  390588 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1213 10:43:19.440501  390588 command_runner.go:130] > # 	"containers_oom_total",
	I1213 10:43:19.440506  390588 command_runner.go:130] > # 	"processes_defunct",
	I1213 10:43:19.440509  390588 command_runner.go:130] > # 	"operations_total",
	I1213 10:43:19.440637  390588 command_runner.go:130] > # 	"operations_latency_seconds",
	I1213 10:43:19.440664  390588 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1213 10:43:19.440670  390588 command_runner.go:130] > # 	"operations_errors_total",
	I1213 10:43:19.440688  390588 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1213 10:43:19.440696  390588 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1213 10:43:19.440701  390588 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1213 10:43:19.440705  390588 command_runner.go:130] > # 	"image_pulls_success_total",
	I1213 10:43:19.440716  390588 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1213 10:43:19.440720  390588 command_runner.go:130] > # 	"containers_oom_count_total",
	I1213 10:43:19.440726  390588 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1213 10:43:19.440734  390588 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1213 10:43:19.440739  390588 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1213 10:43:19.440742  390588 command_runner.go:130] > # ]
	I1213 10:43:19.440749  390588 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1213 10:43:19.440758  390588 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1213 10:43:19.440764  390588 command_runner.go:130] > # The port on which the metrics server will listen.
	I1213 10:43:19.440768  390588 command_runner.go:130] > # metrics_port = 9090
	I1213 10:43:19.440773  390588 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1213 10:43:19.440901  390588 command_runner.go:130] > # metrics_socket = ""
	I1213 10:43:19.440915  390588 command_runner.go:130] > # The certificate for the secure metrics server.
	I1213 10:43:19.440937  390588 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1213 10:43:19.440950  390588 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1213 10:43:19.440955  390588 command_runner.go:130] > # certificate on any modification event.
	I1213 10:43:19.440959  390588 command_runner.go:130] > # metrics_cert = ""
	I1213 10:43:19.440964  390588 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1213 10:43:19.440969  390588 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1213 10:43:19.440972  390588 command_runner.go:130] > # metrics_key = ""
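As a hedged sketch only (metrics are not enabled in this run's configuration), turning on the Prometheus endpoint described above with a restricted collector set could look like the drop-in below; the file name is hypothetical and the collector names are taken from the default list printed above:

	# /etc/crio/crio.conf.d/99-metrics.conf (hypothetical drop-in)
	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_total",
	]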
	I1213 10:43:19.440978  390588 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1213 10:43:19.440982  390588 command_runner.go:130] > [crio.tracing]
	I1213 10:43:19.440995  390588 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1213 10:43:19.441000  390588 command_runner.go:130] > # enable_tracing = false
	I1213 10:43:19.441006  390588 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1213 10:43:19.441015  390588 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1213 10:43:19.441022  390588 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1213 10:43:19.441031  390588 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
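Similarly, a minimal sketch of enabling the OpenTelemetry export described above and sampling every span (1000000 samples per million, as the comment notes). The endpoint is the documented default; none of this is active in the run being logged:

	# /etc/crio/crio.conf.d/99-tracing.conf (hypothetical drop-in)
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000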
	I1213 10:43:19.441039  390588 command_runner.go:130] > # CRI-O NRI configuration.
	I1213 10:43:19.441042  390588 command_runner.go:130] > [crio.nri]
	I1213 10:43:19.441047  390588 command_runner.go:130] > # Globally enable or disable NRI.
	I1213 10:43:19.441253  390588 command_runner.go:130] > # enable_nri = true
	I1213 10:43:19.441268  390588 command_runner.go:130] > # NRI socket to listen on.
	I1213 10:43:19.441274  390588 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1213 10:43:19.441278  390588 command_runner.go:130] > # NRI plugin directory to use.
	I1213 10:43:19.441283  390588 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1213 10:43:19.441288  390588 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1213 10:43:19.441293  390588 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1213 10:43:19.441298  390588 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1213 10:43:19.441355  390588 command_runner.go:130] > # nri_disable_connections = false
	I1213 10:43:19.441365  390588 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1213 10:43:19.441370  390588 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1213 10:43:19.441374  390588 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1213 10:43:19.441379  390588 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1213 10:43:19.441384  390588 command_runner.go:130] > # NRI default validator configuration.
	I1213 10:43:19.441391  390588 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1213 10:43:19.441401  390588 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1213 10:43:19.441405  390588 command_runner.go:130] > # can be restricted/rejected:
	I1213 10:43:19.441417  390588 command_runner.go:130] > # - OCI hook injection
	I1213 10:43:19.441427  390588 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1213 10:43:19.441435  390588 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1213 10:43:19.441440  390588 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1213 10:43:19.441444  390588 command_runner.go:130] > # - adjustment of linux namespaces
	I1213 10:43:19.441453  390588 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1213 10:43:19.441460  390588 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1213 10:43:19.441466  390588 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1213 10:43:19.441469  390588 command_runner.go:130] > #
	I1213 10:43:19.441473  390588 command_runner.go:130] > # [crio.nri.default_validator]
	I1213 10:43:19.441480  390588 command_runner.go:130] > # nri_enable_default_validator = false
	I1213 10:43:19.441485  390588 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1213 10:43:19.441629  390588 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1213 10:43:19.441658  390588 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1213 10:43:19.441671  390588 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1213 10:43:19.441677  390588 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1213 10:43:19.441685  390588 command_runner.go:130] > # nri_validator_required_plugins = [
	I1213 10:43:19.441688  390588 command_runner.go:130] > # ]
	I1213 10:43:19.441694  390588 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
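To make the default-validator options above concrete, here is a hedged sketch of a drop-in that enables the builtin validator and rejects OCI hook injection and namespace adjustments. The file name is hypothetical and none of this is enabled in the configuration dumped above:

	# /etc/crio/crio.conf.d/99-nri-validator.conf (hypothetical drop-in)
	[crio.nri]
	enable_nri = true
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_reject_namespace_adjustment = true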
	I1213 10:43:19.441700  390588 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1213 10:43:19.441709  390588 command_runner.go:130] > [crio.stats]
	I1213 10:43:19.441720  390588 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1213 10:43:19.441730  390588 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1213 10:43:19.441734  390588 command_runner.go:130] > # stats_collection_period = 0
	I1213 10:43:19.441743  390588 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1213 10:43:19.441752  390588 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1213 10:43:19.441756  390588 command_runner.go:130] > # collection_period = 0
	I1213 10:43:19.443275  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.403988128Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1213 10:43:19.443305  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404025092Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1213 10:43:19.443315  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404051931Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1213 10:43:19.443326  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404076596Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1213 10:43:19.443340  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404148548Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:43:19.443352  390588 command_runner.go:130] ! time="2025-12-13T10:43:19.404414955Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1213 10:43:19.443364  390588 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1213 10:43:19.443836  390588 cni.go:84] Creating CNI manager for ""
	I1213 10:43:19.443854  390588 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:43:19.443875  390588 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:43:19.443898  390588 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:43:19.444025  390588 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:43:19.444095  390588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:43:19.450891  390588 command_runner.go:130] > kubeadm
	I1213 10:43:19.450967  390588 command_runner.go:130] > kubectl
	I1213 10:43:19.450987  390588 command_runner.go:130] > kubelet
	I1213 10:43:19.451803  390588 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:43:19.451864  390588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:43:19.459352  390588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:43:19.471938  390588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:43:19.485136  390588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 10:43:19.498010  390588 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:43:19.501925  390588 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1213 10:43:19.502045  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:19.620049  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:20.022042  390588 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:43:20.022188  390588 certs.go:195] generating shared ca certs ...
	I1213 10:43:20.022221  390588 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.022446  390588 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:43:20.022567  390588 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:43:20.022606  390588 certs.go:257] generating profile certs ...
	I1213 10:43:20.022771  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:43:20.022893  390588 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:43:20.023000  390588 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:43:20.023048  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 10:43:20.023081  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 10:43:20.023123  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 10:43:20.023158  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 10:43:20.023202  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 10:43:20.023238  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 10:43:20.023279  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 10:43:20.023318  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 10:43:20.023431  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:43:20.023496  390588 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:43:20.023540  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:43:20.023607  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:43:20.023670  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:43:20.023728  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:43:20.023828  390588 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:43:20.023897  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.023941  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem -> /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.023985  390588 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.024591  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:43:20.049939  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:43:20.071962  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:43:20.093520  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:43:20.117621  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:43:20.135349  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:43:20.152883  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:43:20.170121  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:43:20.188254  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:43:20.205892  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:43:20.223561  390588 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:43:20.241467  390588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:43:20.254691  390588 ssh_runner.go:195] Run: openssl version
	I1213 10:43:20.260777  390588 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1213 10:43:20.261193  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.268769  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:43:20.276440  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280293  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280332  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.280379  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:43:20.320848  390588 command_runner.go:130] > 3ec20f2e
	I1213 10:43:20.321296  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:43:20.328708  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.335901  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:43:20.343392  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347019  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347264  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.347323  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:43:20.388019  390588 command_runner.go:130] > b5213941
	I1213 10:43:20.388604  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:43:20.396066  390588 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.403389  390588 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:43:20.410914  390588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414772  390588 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414823  390588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.414888  390588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:43:20.455731  390588 command_runner.go:130] > 51391683
	I1213 10:43:20.456248  390588 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:43:20.463583  390588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467136  390588 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:43:20.467160  390588 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1213 10:43:20.467167  390588 command_runner.go:130] > Device: 259,1	Inode: 1322536     Links: 1
	I1213 10:43:20.467174  390588 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1213 10:43:20.467180  390588 command_runner.go:130] > Access: 2025-12-13 10:39:12.482590700 +0000
	I1213 10:43:20.467186  390588 command_runner.go:130] > Modify: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467191  390588 command_runner.go:130] > Change: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467197  390588 command_runner.go:130] >  Birth: 2025-12-13 10:35:08.216365089 +0000
	I1213 10:43:20.467264  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:43:20.507794  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.508276  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:43:20.549373  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.549450  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:43:20.591501  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.592041  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:43:20.633163  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.633239  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:43:20.673681  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.674235  390588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:43:20.714863  390588 command_runner.go:130] > Certificate will not expire
	I1213 10:43:20.715372  390588 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:43:20.715472  390588 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:43:20.715572  390588 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:43:20.742591  390588 cri.go:89] found id: ""
	I1213 10:43:20.742663  390588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:43:20.749676  390588 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1213 10:43:20.749696  390588 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1213 10:43:20.749703  390588 command_runner.go:130] > /var/lib/minikube/etcd:
	I1213 10:43:20.750605  390588 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:43:20.750650  390588 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:43:20.750723  390588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:43:20.758246  390588 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:43:20.758662  390588 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-407525" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.758765  390588 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "functional-407525" cluster setting kubeconfig missing "functional-407525" context setting]
	I1213 10:43:20.759076  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.759474  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.759724  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.760259  390588 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 10:43:20.760282  390588 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 10:43:20.760289  390588 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 10:43:20.760294  390588 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 10:43:20.760299  390588 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 10:43:20.760595  390588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:43:20.760675  390588 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1213 10:43:20.768313  390588 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1213 10:43:20.768394  390588 kubeadm.go:602] duration metric: took 17.723293ms to restartPrimaryControlPlane
	I1213 10:43:20.768419  390588 kubeadm.go:403] duration metric: took 53.05457ms to StartCluster
	I1213 10:43:20.768469  390588 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.768581  390588 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.769195  390588 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:43:20.769470  390588 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 10:43:20.769730  390588 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:43:20.769792  390588 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 10:43:20.769868  390588 addons.go:70] Setting storage-provisioner=true in profile "functional-407525"
	I1213 10:43:20.769887  390588 addons.go:239] Setting addon storage-provisioner=true in "functional-407525"
	I1213 10:43:20.769967  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.770424  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.770582  390588 addons.go:70] Setting default-storageclass=true in profile "functional-407525"
	I1213 10:43:20.770602  390588 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-407525"
	I1213 10:43:20.770845  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.776047  390588 out.go:179] * Verifying Kubernetes components...
	I1213 10:43:20.778873  390588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:43:20.803376  390588 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 10:43:20.806823  390588 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:20.806848  390588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 10:43:20.806911  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.815503  390588 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:43:20.815748  390588 kapi.go:59] client config for functional-407525: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 10:43:20.816048  390588 addons.go:239] Setting addon default-storageclass=true in "functional-407525"
	I1213 10:43:20.816085  390588 host.go:66] Checking if "functional-407525" exists ...
	I1213 10:43:20.816499  390588 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:43:20.849236  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.860497  390588 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:20.860524  390588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 10:43:20.860587  390588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:43:20.893135  390588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:43:20.991835  390588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:43:21.017033  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:21.050080  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
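	[editor's note] Each ssh_runner.go line above is minikube executing a command inside the node container; the two addon manifests are applied with the bundled kubectl binary and an explicit KUBECONFIG. A rough Go sketch of issuing the same command locally with os/exec (a stand-in only; minikube's real ssh_runner goes over SSH and is not shown here):

// apply_sketch.go - illustrative stand-in for the ssh_runner apply calls above.
package main

import (
	"fmt"
	"os/exec"
)

// applyManifest mirrors the kubectl invocation in the log, run directly
// on the host instead of through minikube's SSH runner.
func applyManifest(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyManifest(m); err != nil {
			fmt.Println(err)
		}
	}
}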
	I1213 10:43:21.773497  390588 node_ready.go:35] waiting up to 6m0s for node "functional-407525" to be "Ready" ...
	I1213 10:43:21.773656  390588 type.go:168] "Request Body" body=""
	I1213 10:43:21.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:21.774009  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774035  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774063  390588 retry.go:31] will retry after 178.71376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774107  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:21.774121  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774127  390588 retry.go:31] will retry after 267.498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:21.774194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
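	[editor's note] The retry.go lines show each failed apply being rescheduled after a growing, jittered delay (178ms, 267ms, 328ms, ... climbing to roughly 11s later in this log). A minimal sketch of such a backoff loop, assuming a simple double-and-jitter policy capped at 10s; this is not minikube's actual retry package.

// retry_sketch.go - generic backoff loop in the spirit of the retry.go lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jittered, roughly doubling delay, similar in shape to the
		// 178ms -> 267ms -> ... progression seen above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, sleep)
		time.Sleep(sleep)
		delay *= 2
		if delay > 10*time.Second {
			delay = 10 * time.Second
		}
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		return errors.New("connect: connection refused") // stand-in for the failing kubectl apply
	})
	fmt.Println(err)
}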
	I1213 10:43:21.953713  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.014320  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.018022  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.018057  390588 retry.go:31] will retry after 328.520116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.042240  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.097866  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.101425  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.101460  390588 retry.go:31] will retry after 340.23882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.273721  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.274173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.347588  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.405090  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.408724  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.408759  390588 retry.go:31] will retry after 330.053163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.441890  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.497250  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.500831  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.500864  390588 retry.go:31] will retry after 301.657591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.739051  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:22.774467  390588 type.go:168] "Request Body" body=""
	I1213 10:43:22.774545  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:22.774882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:22.796776  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.800408  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.800485  390588 retry.go:31] will retry after 1.110001612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.803607  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:22.863746  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:22.863797  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:22.863816  390588 retry.go:31] will retry after 925.323482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.274339  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.274464  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.274793  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:23.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:43:23.774742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:23.775115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:23.775193  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
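	[editor's note] The node_ready loop above polls GET /api/v1/nodes/functional-407525 roughly every 500ms and, per the earlier log line, will keep waiting up to 6m0s; every request fails while nothing answers on 192.168.49.2:8441. A hedged client-go sketch of the same readiness check (interval and intent mirror the log; the function itself is illustrative, not minikube's node_ready.go):

// nodeready_sketch.go - illustrative node Ready poll.
package nodeready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True
// or the context expires (e.g. a 6-minute deadline, as in the log).
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Println("will retry:", err) // e.g. "connect: connection refused"
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}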
	I1213 10:43:23.789322  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:23.850165  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.853613  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.853701  390588 retry.go:31] will retry after 1.468677433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.910870  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:23.967004  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:23.970690  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:23.970723  390588 retry.go:31] will retry after 1.30336677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:24.274187  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.274270  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.274613  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:24.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:43:24.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:24.774104  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.273868  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.273973  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.274299  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:25.274422  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:25.322752  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:25.335088  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.335126  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.335146  390588 retry.go:31] will retry after 1.31175111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389173  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:25.389228  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.389247  390588 retry.go:31] will retry after 1.937290048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:25.773818  390588 type.go:168] "Request Body" body=""
	I1213 10:43:25.773896  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:25.774238  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:26.274714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.274790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:26.275175  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:26.647823  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:26.708762  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:26.708815  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.708835  390588 retry.go:31] will retry after 2.338895321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:26.773966  390588 type.go:168] "Request Body" body=""
	I1213 10:43:26.774052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:26.774373  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.273820  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:27.327657  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:27.389087  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:27.389124  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.389154  390588 retry.go:31] will retry after 3.77996712s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:27.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:43:27.774347  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:27.774610  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.274639  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.275025  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:28.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:43:28.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:28.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:28.774230  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
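	[editor's note] Every connection-refused error in this stretch, both the kubectl validation failures against localhost:8441 and the node GETs against 192.168.49.2:8441, points at the same symptom: nothing is listening on the apiserver port after the restart. A small diagnostic probe, offered only as a sketch (the test harness does not run this), that distinguishes "connection refused" from a timeout or an accepting port:

// probe_sketch.go - quick TCP probe of the apiserver addresses from the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the kubectl/round_trippers errors above.
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: port is accepting connections\n", addr)
	}
}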
	I1213 10:43:29.048671  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:29.108913  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:29.108956  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.108976  390588 retry.go:31] will retry after 6.196055786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:29.274133  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.274210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.274535  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:29.774410  390588 type.go:168] "Request Body" body=""
	I1213 10:43:29.774493  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:29.774856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.274678  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.274752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.275098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:30.774546  390588 type.go:168] "Request Body" body=""
	I1213 10:43:30.774615  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:30.774881  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:30.774922  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:31.169380  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:31.223779  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:31.227282  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.227315  390588 retry.go:31] will retry after 4.701439473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:31.274644  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.274723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.275035  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:31.773748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:31.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:31.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.274119  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:32.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:43:32.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:32.774160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:33.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.273823  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.274181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:33.274234  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:33.773733  390588 type.go:168] "Request Body" body=""
	I1213 10:43:33.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:33.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.273904  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.274296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:34.773742  390588 type.go:168] "Request Body" body=""
	I1213 10:43:34.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:34.774139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.273828  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.273922  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:35.305578  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:35.371590  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.371636  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.371657  390588 retry.go:31] will retry after 5.458500829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:35.773846  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:35.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:35.774236  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:35.929536  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:35.989448  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:35.989487  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:35.989506  390588 retry.go:31] will retry after 5.007301518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:36.274095  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.274168  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.274462  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:36.774043  390588 type.go:168] "Request Body" body=""
	I1213 10:43:36.774126  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:36.774417  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.273882  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:37.773915  390588 type.go:168] "Request Body" body=""
	I1213 10:43:37.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:37.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:37.774386  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:38.274036  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.274110  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.274365  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:38.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:43:38.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:38.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.273872  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.273948  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:39.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:43:39.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:39.774053  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.273899  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:40.274309  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:40.774007  390588 type.go:168] "Request Body" body=""
	I1213 10:43:40.774083  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:40.774431  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:40.830857  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:40.888820  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:40.888869  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.888889  390588 retry.go:31] will retry after 11.437774943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:40.997102  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:41.058447  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:41.058511  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:41.058532  390588 retry.go:31] will retry after 7.34875984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
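The addons.go/retry.go stanzas above show the apply-with-retry behaviour: each storage-provisioner and storageclass apply fails because the apiserver on port 8441 is refusing connections, and the apply is rescheduled after a randomized delay (11.43 s and 7.35 s here, longer on later attempts). A minimal sketch of that retry shape, assuming jittered delays; this is not minikube's retry package:

    // Re-run `kubectl apply --force -f manifest` with randomized backoff until
    // it succeeds or the attempts run out. Paths mirror the log; the backoff
    // policy is an assumption.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
    		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    		out, err := cmd.CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply failed: %w\n%s", err, out)
    		delay := time.Duration(5+rand.Intn(20)) * time.Second // assumed jitter
    		fmt.Printf("will retry after %s: %v\n", delay, lastErr)
    		time.Sleep(delay)
    	}
    	return lastErr
    }

    func main() {
    	err := applyWithRetry(
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		5,
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }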
	I1213 10:43:41.275648  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.275736  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.275995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:41.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:43:41.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:41.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:42.273927  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.274020  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.274372  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:42.274432  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:42.773693  390588 type.go:168] "Request Body" body=""
	I1213 10:43:42.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:42.774092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:43.773920  390588 type.go:168] "Request Body" body=""
	I1213 10:43:43.774021  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:43.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:44.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.274666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.274925  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:44.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:44.773692  390588 type.go:168] "Request Body" body=""
	I1213 10:43:44.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:44.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.273902  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.274305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:45.773737  390588 type.go:168] "Request Body" body=""
	I1213 10:43:45.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:45.774115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.273797  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.273879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.274217  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:46.774024  390588 type.go:168] "Request Body" body=""
	I1213 10:43:46.774120  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:46.774453  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:46.774515  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:47.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.274050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:47.773764  390588 type.go:168] "Request Body" body=""
	I1213 10:43:47.773857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:47.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.273933  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.274397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:48.407754  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:43:48.470395  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:48.474021  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.474053  390588 retry.go:31] will retry after 19.108505533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:48.774398  390588 type.go:168] "Request Body" body=""
	I1213 10:43:48.774473  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:48.774751  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:48.774803  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:49.274554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.274627  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.274988  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:49.773726  390588 type.go:168] "Request Body" body=""
	I1213 10:43:49.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:49.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.273886  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.273967  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.274244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:50.774213  390588 type.go:168] "Request Body" body=""
	I1213 10:43:50.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:50.774666  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:51.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.274611  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.274924  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:51.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:51.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:43:51.774715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:51.774977  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.274174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:52.327551  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:43:52.388989  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:43:52.389038  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.389058  390588 retry.go:31] will retry after 15.332526016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:43:52.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:43:52.774747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:52.775066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.273766  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.274095  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:53.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:43:53.773894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:53.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:53.774258  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:54.273942  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.274024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:54.774619  390588 type.go:168] "Request Body" body=""
	I1213 10:43:54.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:54.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:55.774730  390588 type.go:168] "Request Body" body=""
	I1213 10:43:55.774809  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:55.775152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:55.775209  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:56.273860  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.273937  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:56.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:43:56.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:56.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.274186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:57.774399  390588 type.go:168] "Request Body" body=""
	I1213 10:43:57.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:57.774745  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:58.274628  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.274703  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.275023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:43:58.275075  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:43:58.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:43:58.773808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:58.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.274411  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.274483  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.274749  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:43:59.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:43:59.774628  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:43:59.774978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:00.774634  390588 type.go:168] "Request Body" body=""
	I1213 10:44:00.774714  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:00.775059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:00.775121  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:01.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.273742  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.274061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:01.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:01.773778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:01.774062  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.273872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.274204  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:02.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:44:02.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:02.774185  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:03.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.273804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.274108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:03.274159  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:03.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:44:03.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:03.774368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.273910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:04.773901  390588 type.go:168] "Request Body" body=""
	I1213 10:44:04.773977  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:04.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:05.274252  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:05.773910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:05.774005  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:05.774314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.274302  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.274372  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.274644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:06.774485  390588 type.go:168] "Request Body" body=""
	I1213 10:44:06.774567  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:06.774982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:07.583825  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:07.646535  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.646580  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.646600  390588 retry.go:31] will retry after 14.697551715s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.722798  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:07.774314  390588 type.go:168] "Request Body" body=""
	I1213 10:44:07.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:07.774682  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:07.774739  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:07.791129  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:07.791173  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:07.791194  390588 retry.go:31] will retry after 13.531528334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:08.273899  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.274336  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:08.774067  390588 type.go:168] "Request Body" body=""
	I1213 10:44:08.774147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:08.774508  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.274290  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.274369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.274678  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:09.774447  390588 type.go:168] "Request Body" body=""
	I1213 10:44:09.774528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:09.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:09.774936  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:10.274570  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.274961  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:10.774562  390588 type.go:168] "Request Body" body=""
	I1213 10:44:10.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:10.774915  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.273789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.274110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:11.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:44:11.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:11.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:12.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.273786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:12.274098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:12.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:12.773833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:12.774136  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:13.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:44:13.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:13.774066  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:14.273794  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.274227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:14.274283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:14.773929  390588 type.go:168] "Request Body" body=""
	I1213 10:44:14.774010  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:14.774363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.273724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.273985  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:15.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:15.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:15.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:16.274139  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.274221  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.274567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:16.274622  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:16.774305  390588 type.go:168] "Request Body" body=""
	I1213 10:44:16.774378  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:16.774644  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.274446  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.274866  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:17.774497  390588 type.go:168] "Request Body" body=""
	I1213 10:44:17.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:17.774899  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:18.274657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.274734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.275051  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:18.275096  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:18.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:44:18.773872  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:18.774209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:19.774026  390588 type.go:168] "Request Body" body=""
	I1213 10:44:19.774099  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:19.774355  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.273801  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.273913  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:20.773981  390588 type.go:168] "Request Body" body=""
	I1213 10:44:20.774053  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:20.774366  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:20.774423  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:21.274357  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.274428  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.274706  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:21.323061  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:21.389635  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:21.389682  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.389701  390588 retry.go:31] will retry after 37.789083594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:21.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:44:21.773876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:21.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.273915  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.273997  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.274345  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:22.344570  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:22.405449  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:22.405493  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.405512  390588 retry.go:31] will retry after 23.725920264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 10:44:22.773711  390588 type.go:168] "Request Body" body=""
	I1213 10:44:22.773782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:22.774033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:23.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.274206  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:23.274261  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:23.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:44:23.773766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:23.774054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.274518  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.274774  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:24.774608  390588 type.go:168] "Request Body" body=""
	I1213 10:44:24.774678  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:24.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:25.274658  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.274733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.275077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:25.275131  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:25.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:44:25.774508  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:25.774773  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.274739  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.274817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.275144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:26.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:26.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:26.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.274455  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.274547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.274811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:27.774572  390588 type.go:168] "Request Body" body=""
	I1213 10:44:27.774642  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:27.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:27.775003  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:28.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.274777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.275087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:28.773642  390588 type.go:168] "Request Body" body=""
	I1213 10:44:28.773716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:28.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.273745  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.274155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:29.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:44:29.773917  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:29.774248  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:30.274557  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.274641  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.274916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:30.274971  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:30.774540  390588 type.go:168] "Request Body" body=""
	I1213 10:44:30.774632  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:30.774962  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.273679  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.274077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:31.774321  390588 type.go:168] "Request Body" body=""
	I1213 10:44:31.774386  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:31.774707  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:32.274525  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.274604  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.274936  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:32.274993  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:32.774698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:32.774804  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:32.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.274529  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.274787  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:33.774581  390588 type.go:168] "Request Body" body=""
	I1213 10:44:33.774664  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:33.775008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:34.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.274794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.275152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:34.275214  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:34.773858  390588 type.go:168] "Request Body" body=""
	I1213 10:44:34.773932  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:34.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:35.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:44:35.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:35.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.273930  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.274033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.274307  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:36.773735  390588 type.go:168] "Request Body" body=""
	I1213 10:44:36.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:36.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:36.774233  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:37.273748  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.274140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:37.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:44:37.774471  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:37.774822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.274598  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.274669  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.274999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:38.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:44:38.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:38.774142  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:39.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.274562  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.274851  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:39.274908  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:39.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:44:39.774730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:39.775049  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.273847  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:40.774227  390588 type.go:168] "Request Body" body=""
	I1213 10:44:40.774300  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:40.774572  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:41.274605  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.274676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.275014  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:41.275084  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:41.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:44:41.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:41.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.273842  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.273921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.274231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:42.773931  390588 type.go:168] "Request Body" body=""
	I1213 10:44:42.774027  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:42.774383  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.273973  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.274062  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.274409  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:43.773648  390588 type.go:168] "Request Body" body=""
	I1213 10:44:43.773733  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:43.773987  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:43.774033  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:44.273702  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.273808  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:44.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:44:44.773958  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:44.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.273983  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.274063  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.274356  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:45.773766  390588 type.go:168] "Request Body" body=""
	I1213 10:44:45.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:45.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:45.774231  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:46.131654  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 10:44:46.194295  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194358  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:46.194451  390588 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:46.274603  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.274700  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.275072  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:46.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:44:46.774112  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:46.774387  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.274208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:47.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:47.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:47.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:48.273867  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.273936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:48.274241  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:48.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:44:48.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:48.774229  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.273767  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.274193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:49.774519  390588 type.go:168] "Request Body" body=""
	I1213 10:44:49.774595  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:49.774926  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:50.274705  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.274774  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.275102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:50.275164  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:50.774065  390588 type.go:168] "Request Body" body=""
	I1213 10:44:50.774140  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:50.774471  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.274252  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.274326  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.274605  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:51.774340  390588 type.go:168] "Request Body" body=""
	I1213 10:44:51.774416  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:51.774757  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.274427  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.274511  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:52.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:44:52.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:52.774919  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:52.774958  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:53.274692  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.274773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.275105  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:53.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:44:53.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:53.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.273740  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:44:54.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:54.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:55.273871  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.273946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.274266  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:55.274336  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:55.773682  390588 type.go:168] "Request Body" body=""
	I1213 10:44:55.773752  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:55.773998  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.273698  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:56.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:44:56.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:56.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.273924  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:57.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:44:57.773928  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:57.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:57.774354  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:44:58.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.273873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.274218  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:58.774470  390588 type.go:168] "Request Body" body=""
	I1213 10:44:58.774560  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:58.774811  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.179566  390588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 10:44:59.239921  390588 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.239971  390588 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 10:44:59.240057  390588 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 10:44:59.247585  390588 out.go:179] * Enabled addons: 
	I1213 10:44:59.249608  390588 addons.go:530] duration metric: took 1m38.479812026s for enable addons: enabled=[]
	I1213 10:44:59.274157  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.274255  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.274564  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:44:59.774339  390588 type.go:168] "Request Body" body=""
	I1213 10:44:59.774421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:44:59.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:44:59.774833  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:00.278749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.278833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.279163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:00.774212  390588 type.go:168] "Request Body" body=""
	I1213 10:45:00.774297  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:00.774688  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.274605  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.274894  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:01.774686  390588 type.go:168] "Request Body" body=""
	I1213 10:45:01.774765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:01.775087  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:01.775143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:02.273808  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.274240  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:02.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:02.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:02.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:03.773792  390588 type.go:168] "Request Body" body=""
	I1213 10:45:03.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:03.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:04.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.274036  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.274352  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:04.274418  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:04.773787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:04.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:04.774175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.273859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:05.773881  390588 type.go:168] "Request Body" body=""
	I1213 10:45:05.773957  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:05.774210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.273726  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.274127  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:06.773770  390588 type.go:168] "Request Body" body=""
	I1213 10:45:06.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:06.774202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:06.774260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:07.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.273836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.274400  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:07.773790  390588 type.go:168] "Request Body" body=""
	I1213 10:45:07.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:07.774207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.273920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.274303  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:08.773655  390588 type.go:168] "Request Body" body=""
	I1213 10:45:08.773725  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:08.773989  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:09.273678  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.274098  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:09.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:09.773807  390588 type.go:168] "Request Body" body=""
	I1213 10:45:09.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:09.774222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.274017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.274269  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:10.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:45:10.774349  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:10.774733  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:11.274712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.274783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.275094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:11.275143  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:11.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:11.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:11.774126  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.273826  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.273930  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:12.773940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:12.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:12.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.273711  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.274065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:13.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:45:13.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:13.774187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:13.774240  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:14.273793  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.273953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:14.773991  390588 type.go:168] "Request Body" body=""
	I1213 10:45:14.774073  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:14.774396  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.274164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:15.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:45:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:15.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:15.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:16.274172  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.274247  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.280111  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1213 10:45:16.773739  390588 type.go:168] "Request Body" body=""
	I1213 10:45:16.773818  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:16.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.273780  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.273862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.274194  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:17.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:45:17.773798  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:17.774048  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:18.273782  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:18.274286  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:18.773986  390588 type.go:168] "Request Body" body=""
	I1213 10:45:18.774078  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:18.774398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.273802  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:19.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:45:19.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:19.774130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:20.274061  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.274147  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.274521  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:20.274567  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:20.774429  390588 type.go:168] "Request Body" body=""
	I1213 10:45:20.774513  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:20.774784  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.274788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.275140  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:21.773809  390588 type.go:168] "Request Body" body=""
	I1213 10:45:21.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:21.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.273923  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.274330  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:22.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:22.773836  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:22.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:22.774266  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:23.273752  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.273825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:23.773854  390588 type.go:168] "Request Body" body=""
	I1213 10:45:23.773925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:23.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.274228  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:24.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:45:24.773852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:24.774188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:25.273932  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.274007  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:25.274311  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:25.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:45:25.773835  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:25.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.273929  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.274023  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.274342  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:26.774676  390588 type.go:168] "Request Body" body=""
	I1213 10:45:26.774744  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:26.774995  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.274109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:27.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:27.773826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:27.774163  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:27.774227  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:28.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.273788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.274057  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:28.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:45:28.773816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:28.774148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.273934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.274250  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:29.773725  390588 type.go:168] "Request Body" body=""
	I1213 10:45:29.773794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:29.774055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:30.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:30.274260  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:30.774238  390588 type.go:168] "Request Body" body=""
	I1213 10:45:30.774312  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:30.774643  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.274624  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.274882  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:31.774665  390588 type.go:168] "Request Body" body=""
	I1213 10:45:31.774738  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:31.775064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.273830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.274149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:32.773762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:32.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:32.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:32.774151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:33.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.274135  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:33.773816  390588 type.go:168] "Request Body" body=""
	I1213 10:45:33.773892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:33.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.274572  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.274643  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.274903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:34.774657  390588 type.go:168] "Request Body" body=""
	I1213 10:45:34.774729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:34.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:34.775152  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:35.273670  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.273759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.274117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:35.774407  390588 type.go:168] "Request Body" body=""
	I1213 10:45:35.774479  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:35.774771  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.274663  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.274756  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.275065  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:36.773806  390588 type.go:168] "Request Body" body=""
	I1213 10:45:36.773912  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:36.774265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:37.273706  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.273778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.274054  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:37.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:37.773740  390588 type.go:168] "Request Body" body=""
	I1213 10:45:37.773842  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:37.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.273961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:38.773975  390588 type.go:168] "Request Body" body=""
	I1213 10:45:38.774042  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:38.774302  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:39.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.273861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.274199  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:39.274262  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:39.773743  390588 type.go:168] "Request Body" body=""
	I1213 10:45:39.773824  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:39.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.273728  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.273827  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.274144  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:40.774643  390588 type.go:168] "Request Body" body=""
	I1213 10:45:40.774717  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:40.775033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.273691  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.273765  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:41.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:45:41.774475  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:41.774789  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:41.774848  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:42.274590  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.274665  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.275006  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:42.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:45:42.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:42.774116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.274417  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.274505  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.274764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:43.774491  390588 type.go:168] "Request Body" body=""
	I1213 10:45:43.774561  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:43.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:43.774985  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:44.274631  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.274716  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:44.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:44.773832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:44.774086  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.273789  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.273877  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:45.773938  390588 type.go:168] "Request Body" body=""
	I1213 10:45:45.774016  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:45.774370  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:46.274211  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.274311  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.274593  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:46.274641  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:46.774347  390588 type.go:168] "Request Body" body=""
	I1213 10:45:46.774423  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:46.774786  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.274591  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.274695  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:47.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:45:47.773821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.273791  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.273871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.274221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:48.773944  390588 type.go:168] "Request Body" body=""
	I1213 10:45:48.774025  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:48.774340  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:48.774398  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:49.273717  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.274115  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:49.773760  390588 type.go:168] "Request Body" body=""
	I1213 10:45:49.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:49.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.273881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:50.774153  390588 type.go:168] "Request Body" body=""
	I1213 10:45:50.774227  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:50.774498  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:50.774547  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:51.274578  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.274657  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.274980  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:51.773696  390588 type.go:168] "Request Body" body=""
	I1213 10:45:51.773772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:51.774097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.273783  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.274044  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:52.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:45:52.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:52.774214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:53.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.274028  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.274362  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:53.274420  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:53.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:45:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:53.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:54.773918  390588 type.go:168] "Request Body" body=""
	I1213 10:45:54.773996  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:54.774325  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.273749  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.274197  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:55.773750  390588 type.go:168] "Request Body" body=""
	I1213 10:45:55.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:55.774176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:55.774229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:56.273954  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.274030  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.274368  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:56.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:45:56.774681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:56.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:57.773886  390588 type.go:168] "Request Body" body=""
	I1213 10:45:57.773969  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:57.774297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:45:57.774351  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:45:58.274008  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.274074  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.274328  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:58.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:45:58.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:58.774179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.273755  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.273831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.274152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:45:59.773661  390588 type.go:168] "Request Body" body=""
	I1213 10:45:59.773729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:45:59.773978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:00.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.273870  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.274207  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:00.274265  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
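	(Editor's note on the repeated entries above and below: minikube's node_ready wait loop is polling GET /api/v1/nodes/functional-407525 roughly every 500ms and retrying for as long as the apiserver on 192.168.49.2:8441 refuses connections, which is why the same Request/Response/"will retry" triplet recurs with only the timestamp changing. As a rough, self-contained illustration only — assumed helper name waitNodeReady, assumed kubeconfig path, assumed 6-minute timeout; this is not minikube's actual node_ready.go code — a client-go sketch of the same behavior looks like this:

	// Illustrative sketch only (assumed names; not minikube's node_ready.go):
	// poll the node's Ready condition every 500ms, retrying on errors such as
	// "connection refused" while the apiserver is unreachable.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Mirrors the W "will retry" lines in this log: report the error and poll again.
					fmt.Printf("error getting node %q (will retry): %v\n", name, err)
					continue
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

	func main() {
		// Assumed kubeconfig location; in this test the active context points at https://192.168.49.2:8441.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-407525"); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}

	With the apiserver down, every Get returns an error wrapping "connect: connection refused", so such a loop emits the "will retry" message on each tick — matching the W lines here — until the node turns Ready or the wait deadline expires. The original log continues below.)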
	I1213 10:46:00.774194  390588 type.go:168] "Request Body" body=""
	I1213 10:46:00.774271  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:00.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.274425  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.274499  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.274770  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:01.774648  390588 type.go:168] "Request Body" body=""
	I1213 10:46:01.774734  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:01.775108  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.273787  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:02.773686  390588 type.go:168] "Request Body" body=""
	I1213 10:46:02.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:02.774020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:02.774062  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:03.273812  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.273890  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.274214  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:03.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:46:03.773844  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:03.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.274309  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.274379  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.274657  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:04.774430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:04.774509  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:04.774864  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:04.774924  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:05.274540  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.274616  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.274963  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:05.773676  390588 type.go:168] "Request Body" body=""
	I1213 10:46:05.773758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:05.774085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.273969  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.274052  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.274459  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:06.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:46:06.773902  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:06.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:07.274619  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.274708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.274974  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:07.275017  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:07.773671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:07.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:07.774117  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.273847  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.274261  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:08.773957  390588 type.go:168] "Request Body" body=""
	I1213 10:46:08.774035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:08.774397  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.273804  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.273894  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.274256  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:09.773968  390588 type.go:168] "Request Body" body=""
	I1213 10:46:09.774044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:09.774403  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:09.774460  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:10.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:10.774136  390588 type.go:168] "Request Body" body=""
	I1213 10:46:10.774210  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:10.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.274519  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.274594  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.274918  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:11.774397  390588 type.go:168] "Request Body" body=""
	I1213 10:46:11.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:11.774832  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:11.774891  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:12.274659  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.274757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.275082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:12.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:46:12.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:12.774233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.273921  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.273994  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:13.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:46:13.773843  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:13.774234  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:14.273963  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.274066  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.274415  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:14.274474  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:14.773715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:14.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:14.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.273806  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.274220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:15.773837  390588 type.go:168] "Request Body" body=""
	I1213 10:46:15.773921  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:15.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:16.274096  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.274165  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.274517  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:16.274565  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:16.774276  390588 type.go:168] "Request Body" body=""
	I1213 10:46:16.774356  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:16.774701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.274489  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.274563  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.274929  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:17.773641  390588 type.go:168] "Request Body" body=""
	I1213 10:46:17.773710  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:17.773957  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:18.274732  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.274812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.275153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:18.275207  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:18.773906  390588 type.go:168] "Request Body" body=""
	I1213 10:46:18.773982  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:18.774326  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.274430  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.274528  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.274794  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:19.774601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:19.774671  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:19.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.273724  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:20.774125  390588 type.go:168] "Request Body" body=""
	I1213 10:46:20.774196  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:20.774577  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:20.774628  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:21.274424  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.274514  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.274834  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:21.774531  390588 type.go:168] "Request Body" body=""
	I1213 10:46:21.774612  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:21.774944  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.274640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.274709  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.275021  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:22.774663  390588 type.go:168] "Request Body" body=""
	I1213 10:46:22.774773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:22.775134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:22.775197  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:23.273890  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.273971  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.274309  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:23.773717  390588 type.go:168] "Request Body" body=""
	I1213 10:46:23.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:23.774083  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.273813  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:24.773781  390588 type.go:168] "Request Body" body=""
	I1213 10:46:24.773855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:25.274593  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.274667  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.274932  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:25.274974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:25.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:46:25.773769  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:25.774103  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.273799  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.274187  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:26.773723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:26.773803  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:26.774134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:27.773942  390588 type.go:168] "Request Body" body=""
	I1213 10:46:27.774024  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:27.774376  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:27.774430  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:28.274709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.274789  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.275064  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:28.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:46:28.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:28.774272  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.273759  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.274176  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:29.774348  390588 type.go:168] "Request Body" body=""
	I1213 10:46:29.774419  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:29.774764  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:29.774820  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:30.274620  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.274696  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.275046  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:46:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:30.775077  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.273951  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:31.773775  390588 type.go:168] "Request Body" body=""
	I1213 10:46:31.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:31.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:32.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.273869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.274211  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:32.274272  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:46:32.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:32.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.273841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:33.773932  390588 type.go:168] "Request Body" body=""
	I1213 10:46:33.774017  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:33.774448  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.273707  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.273777  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.274033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:34.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:34.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:34.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:34.774219  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:35.273760  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.273839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:35.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:46:35.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:35.774091  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.273704  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.273807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.274146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:36.773734  390588 type.go:168] "Request Body" body=""
	I1213 10:46:36.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:36.774138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:37.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.273806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:37.274109  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:37.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:46:37.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:37.774167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.273869  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.273941  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.274257  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:38.774621  390588 type.go:168] "Request Body" body=""
	I1213 10:46:38.774711  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:38.774971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:39.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.273795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.274130  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:39.274185  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:39.773882  390588 type.go:168] "Request Body" body=""
	I1213 10:46:39.773961  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:39.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.273738  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.273832  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.274158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:40.774749  390588 type.go:168] "Request Body" body=""
	I1213 10:46:40.774834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:40.775222  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:41.273940  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.274026  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.274347  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:41.274405  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:41.774636  390588 type.go:168] "Request Body" body=""
	I1213 10:46:41.774701  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:41.774952  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.273730  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.273828  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.274210  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:42.773953  390588 type.go:168] "Request Body" body=""
	I1213 10:46:42.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:42.774405  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:43.274638  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.274978  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:43.275016  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:43.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:46:43.773806  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:43.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.274363  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:44.774070  390588 type.go:168] "Request Body" body=""
	I1213 10:46:44.774138  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:44.774399  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.273823  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.273898  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.274268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:45.773995  390588 type.go:168] "Request Body" body=""
	I1213 10:46:45.774070  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:45.774394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:45.774448  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:46.274246  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.274313  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.274596  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:46.774345  390588 type.go:168] "Request Body" body=""
	I1213 10:46:46.774417  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:46.774765  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.274423  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.274522  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.274846  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:47.774170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:47.774241  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:47.774544  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:47.774600  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:48.274170  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.274257  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.274614  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:48.774460  390588 type.go:168] "Request Body" body=""
	I1213 10:46:48.774547  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:48.774903  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.274601  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.274681  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.274964  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:49.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:49.773817  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:49.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:50.273855  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.273935  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.274285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:50.274341  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:50.774135  390588 type.go:168] "Request Body" body=""
	I1213 10:46:50.774202  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:50.774454  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.274467  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.274552  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.274884  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:51.774669  390588 type.go:168] "Request Body" body=""
	I1213 10:46:51.774754  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:51.775052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.273723  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.274094  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:52.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:46:52.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:52.774189  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:52.774245  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:53.273910  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.273985  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.274313  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:53.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:46:53.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:53.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.273799  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.274242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:54.773831  390588 type.go:168] "Request Body" body=""
	I1213 10:46:54.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:54.774273  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:54.774330  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:55.273935  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.274280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:55.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:46:55.773841  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:55.774166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.273719  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.273793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.274128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:56.774284  390588 type.go:168] "Request Body" body=""
	I1213 10:46:56.774353  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:56.774609  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:56.774649  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:57.274349  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.274429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.274756  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:57.774568  390588 type.go:168] "Request Body" body=""
	I1213 10:46:57.774644  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:57.774981  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.274491  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.274570  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.274873  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:58.774677  390588 type.go:168] "Request Body" body=""
	I1213 10:46:58.774750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:58.775093  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:46:58.775146  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:46:59.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.274092  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:46:59.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:46:59.773787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:46:59.774109  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.273858  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.273965  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:00.774431  390588 type.go:168] "Request Body" body=""
	I1213 10:47:00.774530  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:00.774877  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:01.273680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.273746  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.274056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:01.274104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:01.773802  390588 type.go:168] "Request Body" body=""
	I1213 10:47:01.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:01.774231  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.273805  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.273883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.274188  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:02.773731  390588 type.go:168] "Request Body" body=""
	I1213 10:47:02.773820  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:02.774149  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:03.273795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.273876  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.274215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:03.274268  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:03.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:03.773879  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:03.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.274436  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.274533  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.274808  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:04.774597  390588 type.go:168] "Request Body" body=""
	I1213 10:47:04.774676  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:04.775027  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.273736  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.273815  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:05.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:05.773934  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:05.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:05.774242  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:06.273720  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.273796  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.274139  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:06.773856  390588 type.go:168] "Request Body" body=""
	I1213 10:47:06.773936  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:06.774268  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.274469  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.274550  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.274856  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:07.774641  390588 type.go:168] "Request Body" body=""
	I1213 10:47:07.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:07.775047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:07.775098  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:08.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.273853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:08.773674  390588 type.go:168] "Request Body" body=""
	I1213 10:47:08.773747  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:08.773993  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.273756  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.273885  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:09.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:09.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:09.774186  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:10.274330  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.274409  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.274689  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:10.274730  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:10.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:47:10.774724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:10.775070  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.273826  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:11.773673  390588 type.go:168] "Request Body" body=""
	I1213 10:47:11.773751  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:11.774001  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.274233  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:12.773795  390588 type.go:168] "Request Body" body=""
	I1213 10:47:12.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:12.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:12.774276  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:13.273922  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.273993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.274301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:13.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:13.773837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:13.774158  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.274297  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:14.773969  390588 type.go:168] "Request Body" body=""
	I1213 10:47:14.774038  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:14.774294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:14.774335  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:15.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.273867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:15.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:15.773859  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:15.774205  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.273875  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.274219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:16.773783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:16.773856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:16.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1213 10:47:16.775086  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:17.273732  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.273805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:17.773664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:17.773749  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:17.774040  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.273880  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.274223  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:18.773754  390588 type.go:168] "Request Body" body=""
	I1213 10:47:18.773831  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:18.774146  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:19.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.273784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:19.274151  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:19.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:47:19.773873  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:19.774244  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.273959  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.274044  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.274394  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:20.774250  390588 type.go:168] "Request Body" body=""
	I1213 10:47:20.774369  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:20.774676  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:21.274708  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.274781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:21.275128  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:21.773729  390588 type.go:168] "Request Body" body=""
	I1213 10:47:21.773812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:21.774174  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.273821  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:22.773835  390588 type.go:168] "Request Body" body=""
	I1213 10:47:22.773910  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:22.774224  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.273864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.274153  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:23.774583  390588 type.go:168] "Request Body" body=""
	I1213 10:47:23.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:23.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:23.774974  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:24.274727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.274797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.275112  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:24.773773  390588 type.go:168] "Request Body" body=""
	I1213 10:47:24.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:24.774190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.273794  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.274148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:25.773763  390588 type.go:168] "Request Body" body=""
	I1213 10:47:25.773845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:25.774201  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:26.273894  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.273970  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:26.274358  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:26.773709  390588 type.go:168] "Request Body" body=""
	I1213 10:47:26.773784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:26.774082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.274198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:27.773769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:27.773862  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:27.774181  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.273908  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.273980  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.274246  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:28.773791  390588 type.go:168] "Request Body" body=""
	I1213 10:47:28.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:28.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:28.774280  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:29.273783  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.273866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.274195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:29.773879  390588 type.go:168] "Request Body" body=""
	I1213 10:47:29.773954  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:29.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.273792  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.274239  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:30.774640  390588 type.go:168] "Request Body" body=""
	I1213 10:47:30.774719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:30.775063  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:30.775117  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:31.273664  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.273730  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.273976  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:31.773680  390588 type.go:168] "Request Body" body=""
	I1213 10:47:31.773753  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:31.774074  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.273770  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.273856  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.274200  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:32.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:47:32.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:32.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:33.273743  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.273816  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.274165  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:33.274237  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:33.773778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:33.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:33.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.273877  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.273952  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.274209  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:34.773757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:34.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:34.774154  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.273734  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.273810  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:35.773845  390588 type.go:168] "Request Body" body=""
	I1213 10:47:35.773920  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:35.774173  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:35.774222  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:36.273675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.273750  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.274088  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:36.773810  390588 type.go:168] "Request Body" body=""
	I1213 10:47:36.773886  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:36.774215  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.273714  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.273797  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.274138  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:37.773767  390588 type.go:168] "Request Body" body=""
	I1213 10:47:37.773861  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:37.774225  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:37.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:38.273949  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.274035  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.274379  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:38.774693  390588 type.go:168] "Request Body" body=""
	I1213 10:47:38.774771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:38.775056  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.273772  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:39.773832  390588 type.go:168] "Request Body" body=""
	I1213 10:47:39.773906  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:39.774253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:39.774308  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:40.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.274596  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.274862  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:40.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:47:40.774759  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:40.775099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.274171  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:41.773727  390588 type.go:168] "Request Body" body=""
	I1213 10:47:41.773800  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:41.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:42.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.274281  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:42.274339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:42.773878  390588 type.go:168] "Request Body" body=""
	I1213 10:47:42.773968  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:42.774283  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.273946  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.274019  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.274334  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:43.773751  390588 type.go:168] "Request Body" body=""
	I1213 10:47:43.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:43.774150  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.273757  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.273838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.274183  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:44.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:47:44.773864  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:44.774198  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:44.774253  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:45.273924  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.274419  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:45.773843  390588 type.go:168] "Request Body" body=""
	I1213 10:47:45.773923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:45.774295  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.274029  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.274287  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:46.773789  390588 type.go:168] "Request Body" body=""
	I1213 10:47:46.773869  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:46.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:46.774283  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:47.273961  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.274043  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.274393  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:47.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:47:47.773795  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:47.774076  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.273777  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.274213  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:48.773914  390588 type.go:168] "Request Body" body=""
	I1213 10:47:48.773990  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:48.774305  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:48.774364  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:49.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.273791  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.274082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:49.773785  390588 type.go:168] "Request Body" body=""
	I1213 10:47:49.773866  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:49.774184  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.273769  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.273849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.274190  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:50.774233  390588 type.go:168] "Request Body" body=""
	I1213 10:47:50.774309  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:50.774588  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:50.774631  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:51.274650  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.274724  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.275059  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:51.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:47:51.773878  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:51.774236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.274456  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.274538  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.274799  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:52.774588  390588 type.go:168] "Request Body" body=""
	I1213 10:47:52.774666  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:52.775007  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:52.775061  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:53.273753  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.273833  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.274191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:53.773675  390588 type.go:168] "Request Body" body=""
	I1213 10:47:53.773745  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:53.774008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.273722  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.273801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.274131  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:54.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:47:54.773943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:54.774296  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:55.273989  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.274065  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.274332  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:55.274372  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:55.774037  390588 type.go:168] "Request Body" body=""
	I1213 10:47:55.774114  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:55.774457  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.274294  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.274368  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.274696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:56.774209  390588 type.go:168] "Request Body" body=""
	I1213 10:47:56.774284  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:56.774573  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:57.274365  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.274443  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.274796  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:57.274856  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:57.774615  390588 type.go:168] "Request Body" body=""
	I1213 10:47:57.774691  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:57.775029  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.274293  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.274363  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.274642  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:58.774411  390588 type.go:168] "Request Body" body=""
	I1213 10:47:58.774519  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:58.774841  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:47:59.274495  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.274571  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.274905  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:47:59.274961  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:47:59.774120  390588 type.go:168] "Request Body" body=""
	I1213 10:47:59.774186  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:47:59.774529  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.274587  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.274674  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.275002  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:00.773691  390588 type.go:168] "Request Body" body=""
	I1213 10:48:00.773785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:00.774128  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.273694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.273766  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.274084  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:01.773820  390588 type.go:168] "Request Body" body=""
	I1213 10:48:01.773905  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:01.774301  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:01.774362  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:02.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.273943  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:02.773719  390588 type.go:168] "Request Body" body=""
	I1213 10:48:02.773929  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:02.774221  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.273855  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:03.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:03.773848  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:03.774192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:04.274348  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.274421  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.274701  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:04.274747  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:04.774520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:04.774598  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:04.774955  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.274625  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.274699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.275061  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:05.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:05.773840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:05.774191  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.273741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.273822  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:06.773880  390588 type.go:168] "Request Body" body=""
	I1213 10:48:06.773956  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:06.774280  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:06.774339  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:07.273666  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.274015  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:07.773765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:07.773867  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:07.774227  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.273802  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.273887  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:08.774404  390588 type.go:168] "Request Body" body=""
	I1213 10:48:08.774472  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:08.774731  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:08.774771  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:09.274521  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.274602  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.274979  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:09.774731  390588 type.go:168] "Request Body" body=""
	I1213 10:48:09.774819  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:09.775148  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.274501  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.274577  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.274825  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:10.774685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:10.774760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:10.775071  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:10.775127  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:11.273657  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.273737  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.274080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:11.774554  390588 type.go:168] "Request Body" body=""
	I1213 10:48:11.774619  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:11.774916  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.274606  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.274685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.275008  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:12.773772  390588 type.go:168] "Request Body" body=""
	I1213 10:48:12.773849  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:12.774196  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:13.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.274085  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:13.274132  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:13.773694  390588 type.go:168] "Request Body" body=""
	I1213 10:48:13.773768  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:13.774050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.273699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.273776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:14.773688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:14.773757  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:14.774016  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:15.273762  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.273837  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.274160  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:15.274217  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:15.773796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:15.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:15.774220  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.273918  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.274004  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.274258  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:16.773913  390588 type.go:168] "Request Body" body=""
	I1213 10:48:16.773993  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:16.774333  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:17.273914  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.273989  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.274304  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:17.274360  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:17.773705  390588 type.go:168] "Request Body" body=""
	I1213 10:48:17.773779  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:17.774047  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.273778  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.273857  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.274175  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:18.773780  390588 type.go:168] "Request Body" body=""
	I1213 10:48:18.773874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:18.774242  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:19.274520  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.274589  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:19.274893  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:19.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:19.774722  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:19.775081  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.273688  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.273761  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.274090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:20.773877  390588 type.go:168] "Request Body" body=""
	I1213 10:48:20.773951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:20.774252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.274225  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.274303  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.274658  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:21.774461  390588 type.go:168] "Request Body" body=""
	I1213 10:48:21.774542  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:21.774931  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:21.774990  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:22.273646  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.273719  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.273971  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:22.773678  390588 type.go:168] "Request Body" body=""
	I1213 10:48:22.773773  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:22.774157  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.273879  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.273951  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.274270  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:23.774466  390588 type.go:168] "Request Body" body=""
	I1213 10:48:23.774555  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:23.774828  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:24.274703  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.274778  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.275113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:24.275166  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:24.773777  390588 type.go:168] "Request Body" body=""
	I1213 10:48:24.773853  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:24.774193  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.273716  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.274055  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:25.773749  390588 type.go:168] "Request Body" body=""
	I1213 10:48:25.773830  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:25.774156  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.273812  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.274134  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:26.774405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:26.774477  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:26.774735  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:26.774777  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:27.274550  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.274638  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.274990  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:27.773699  390588 type.go:168] "Request Body" body=""
	I1213 10:48:27.773775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:27.774125  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.274454  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.274531  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.274852  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:28.774642  390588 type.go:168] "Request Body" body=""
	I1213 10:48:28.774713  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:28.775023  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:28.775072  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:29.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.274166  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:29.773685  390588 type.go:168] "Request Body" body=""
	I1213 10:48:29.773767  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:29.774067  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.273776  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.273858  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.274172  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:30.773721  390588 type.go:168] "Request Body" body=""
	I1213 10:48:30.773801  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:30.774182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:31.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.273960  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.274245  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:31.274287  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:31.773960  390588 type.go:168] "Request Body" body=""
	I1213 10:48:31.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:31.774353  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.273790  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.273874  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.274212  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:32.773736  390588 type.go:168] "Request Body" body=""
	I1213 10:48:32.773805  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:32.774110  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.273773  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.273854  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.274167  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:33.773768  390588 type.go:168] "Request Body" body=""
	I1213 10:48:33.773850  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:33.774195  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:33.774250  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:34.274441  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.274551  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.274859  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:34.774530  390588 type.go:168] "Request Body" body=""
	I1213 10:48:34.774653  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:34.774994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:35.773804  390588 type.go:168] "Request Body" body=""
	I1213 10:48:35.773871  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:35.774121  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:36.273709  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.273787  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.274129  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:36.274191  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:36.773868  390588 type.go:168] "Request Body" body=""
	I1213 10:48:36.773953  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:36.774291  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.273713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.273782  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.274052  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:37.773728  390588 type.go:168] "Request Body" body=""
	I1213 10:48:37.773807  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:37.774133  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.273695  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.273771  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.274096  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:38.774434  390588 type.go:168] "Request Body" body=""
	I1213 10:48:38.774523  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:38.774857  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:38.774915  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:39.274697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.274775  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.275116  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:39.773799  390588 type.go:168] "Request Body" body=""
	I1213 10:48:39.773875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:39.774219  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.274392  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.274461  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.274778  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:40.774600  390588 type.go:168] "Request Body" body=""
	I1213 10:48:40.774675  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:40.774999  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:40.775056  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:41.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.273758  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.274099  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:41.774223  390588 type.go:168] "Request Body" body=""
	I1213 10:48:41.774306  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:41.774579  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.274405  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.274535  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.274934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:42.774574  390588 type.go:168] "Request Body" body=""
	I1213 10:48:42.774658  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:42.775003  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:43.273697  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.273772  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.274034  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:43.274076  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:43.773741  390588 type.go:168] "Request Body" body=""
	I1213 10:48:43.773825  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:43.774164  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.273866  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.273947  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.274284  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:44.773701  390588 type.go:168] "Request Body" body=""
	I1213 10:48:44.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:44.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:45.273825  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.273925  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.274348  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:45.274406  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:45.774078  390588 type.go:168] "Request Body" body=""
	I1213 10:48:45.774155  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:45.774567  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.274333  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.274401  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.274668  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:46.774394  390588 type.go:168] "Request Body" body=""
	I1213 10:48:46.774466  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:46.774810  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:47.274617  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.274705  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.275033  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:47.275083  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:47.774292  390588 type.go:168] "Request Body" body=""
	I1213 10:48:47.774364  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:47.774696  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.274508  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.274590  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.274935  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:48.774610  390588 type.go:168] "Request Body" body=""
	I1213 10:48:48.774685  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:48.775020  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.273715  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.273781  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.274042  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:49.773747  390588 type.go:168] "Request Body" body=""
	I1213 10:48:49.773829  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:49.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:49.774228  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:50.273926  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.274002  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.274364  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:50.774202  390588 type.go:168] "Request Body" body=""
	I1213 10:48:50.774276  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:50.774536  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.274422  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.274498  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.274822  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:51.774623  390588 type.go:168] "Request Body" body=""
	I1213 10:48:51.774699  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:51.775050  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:51.775104  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:52.273779  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.273845  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.274097  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:52.773759  390588 type.go:168] "Request Body" body=""
	I1213 10:48:52.773834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:52.774161  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.273848  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.273927  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.274265  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:53.773713  390588 type.go:168] "Request Body" body=""
	I1213 10:48:53.773788  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:53.774090  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:54.273763  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.274182  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:54.274238  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:54.773755  390588 type.go:168] "Request Body" body=""
	I1213 10:48:54.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:54.774143  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.273671  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.273739  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.273994  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:55.773662  390588 type.go:168] "Request Body" body=""
	I1213 10:48:55.773743  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:55.774113  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:56.274020  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.274092  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.274398  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:56.274455  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:56.773718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:56.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:56.774114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.273796  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.273875  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.274202  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:57.773898  390588 type.go:168] "Request Body" body=""
	I1213 10:48:57.773979  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:57.774308  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.273718  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.273790  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.274114  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:58.773788  390588 type.go:168] "Request Body" body=""
	I1213 10:48:58.773908  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:58.774247  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:48:58.774302  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:48:59.273809  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.273892  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.274236  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:48:59.773708  390588 type.go:168] "Request Body" body=""
	I1213 10:48:59.773786  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:48:59.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.273835  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.273945  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.274259  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:00.774386  390588 type.go:168] "Request Body" body=""
	I1213 10:49:00.774468  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:00.774788  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:00.774843  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:01.274715  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.274784  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.275080  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:01.773784  390588 type.go:168] "Request Body" body=""
	I1213 10:49:01.773863  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:01.774155  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.273798  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.273897  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.274252  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:02.773815  390588 type.go:168] "Request Body" body=""
	I1213 10:49:02.773883  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:02.774152  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:03.273838  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.273923  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.274294  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:03.274348  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:03.773866  390588 type.go:168] "Request Body" body=""
	I1213 10:49:03.773946  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:03.774285  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.273977  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.274050  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.274314  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:04.773758  390588 type.go:168] "Request Body" body=""
	I1213 10:49:04.773838  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:04.774178  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.273888  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.273962  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.274293  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:05.773962  390588 type.go:168] "Request Body" body=""
	I1213 10:49:05.774033  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:05.774279  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:05.774317  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:06.274277  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.274357  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.274684  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:06.774350  390588 type.go:168] "Request Body" body=""
	I1213 10:49:06.774429  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:06.774754  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.274072  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.274145  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.274401  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:07.773761  390588 type.go:168] "Request Body" body=""
	I1213 10:49:07.773839  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:07.774168  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:08.273771  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.273852  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.274170  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:08.274229  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:08.773716  390588 type.go:168] "Request Body" body=""
	I1213 10:49:08.773793  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:08.774102  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.273765  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.273840  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:09.773911  390588 type.go:168] "Request Body" body=""
	I1213 10:49:09.773987  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:09.774329  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:10.274643  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.274715  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.275018  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:10.275073  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:10.774631  390588 type.go:168] "Request Body" body=""
	I1213 10:49:10.774708  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:10.775082  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.273712  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.273785  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.274118  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:11.773811  390588 type.go:168] "Request Body" body=""
	I1213 10:49:11.773881  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:11.774141  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.273785  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.273860  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.274192  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:12.773779  390588 type.go:168] "Request Body" body=""
	I1213 10:49:12.773865  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:12.774208  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:12.774264  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:13.274414  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.274491  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.274806  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:13.774595  390588 type.go:168] "Request Body" body=""
	I1213 10:49:13.774673  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:13.775019  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.274700  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.274776  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.275122  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:14.773666  390588 type.go:168] "Request Body" body=""
	I1213 10:49:14.773732  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:14.773982  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:15.273683  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.273760  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.274100  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:15.274153  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:15.773812  390588 type.go:168] "Request Body" body=""
	I1213 10:49:15.773895  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:15.774230  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.273920  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.273995  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.274253  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:16.773782  390588 type.go:168] "Request Body" body=""
	I1213 10:49:16.773868  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:16.774406  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:17.274090  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.274171  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.274528  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:17.274584  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:17.774247  390588 type.go:168] "Request Body" body=""
	I1213 10:49:17.774320  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:17.774585  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.274376  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.274452  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.274800  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:18.774498  390588 type.go:168] "Request Body" body=""
	I1213 10:49:18.774575  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:18.774922  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:19.274279  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.274351  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.274659  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1213 10:49:19.274729  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-407525": dial tcp 192.168.49.2:8441: connect: connection refused
	I1213 10:49:19.774509  390588 type.go:168] "Request Body" body=""
	I1213 10:49:19.774592  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:19.774934  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.273655  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.273729  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.274058  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:20.773657  390588 type.go:168] "Request Body" body=""
	I1213 10:49:20.773723  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:20.773970  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.273725  390588 type.go:168] "Request Body" body=""
	I1213 10:49:21.273834  390588 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-407525" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1213 10:49:21.274179  390588 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1213 10:49:21.773895  390588 type.go:168] "Request Body" body=""
	W1213 10:49:21.773963  390588 node_ready.go:55] error getting node "functional-407525" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded
	I1213 10:49:21.773982  390588 node_ready.go:38] duration metric: took 6m0.000438977s for node "functional-407525" to be "Ready" ...
	I1213 10:49:21.777070  390588 out.go:203] 
	W1213 10:49:21.779923  390588 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 10:49:21.779945  390588 out.go:285] * 
	W1213 10:49:21.782066  390588 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 10:49:21.784854  390588 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:49:30 functional-407525 crio[5356]: time="2025-12-13T10:49:30.441092452Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=ad0770d6-46ee-472d-84e2-b52693efc812 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.482515404Z" level=info msg="Checking image status: minikube-local-cache-test:functional-407525" id=4068301b-964f-4f0e-b837-bce95c5d9dbc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.482692702Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.482736034Z" level=info msg="Image minikube-local-cache-test:functional-407525 not found" id=4068301b-964f-4f0e-b837-bce95c5d9dbc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.48281121Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-407525 found" id=4068301b-964f-4f0e-b837-bce95c5d9dbc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.506781096Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-407525" id=1aa416ce-74b2-46a5-985c-573303b662d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.506938702Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-407525 not found" id=1aa416ce-74b2-46a5-985c-573303b662d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.506985176Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-407525 found" id=1aa416ce-74b2-46a5-985c-573303b662d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.533925898Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-407525" id=abe4743f-552f-4861-aa05-f9564df92fcd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.534083545Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-407525 not found" id=abe4743f-552f-4861-aa05-f9564df92fcd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:31 functional-407525 crio[5356]: time="2025-12-13T10:49:31.534138421Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-407525 found" id=abe4743f-552f-4861-aa05-f9564df92fcd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.502935443Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=492be620-e0b9-4142-8062-1456f326837a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.842412631Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a07e4731-b6a1-41b2-b48e-4664da1902b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.842569441Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a07e4731-b6a1-41b2-b48e-4664da1902b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:32 functional-407525 crio[5356]: time="2025-12-13T10:49:32.842610024Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a07e4731-b6a1-41b2-b48e-4664da1902b8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.389847421Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=9fe27fb5-6769-4d03-b269-b0631ee3e4b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.390003361Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=9fe27fb5-6769-4d03-b269-b0631ee3e4b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.390040564Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=9fe27fb5-6769-4d03-b269-b0631ee3e4b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.414987895Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3326d30a-f9cb-49f4-b206-bf711f6bc60d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.41511309Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3326d30a-f9cb-49f4-b206-bf711f6bc60d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.41514948Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=3326d30a-f9cb-49f4-b206-bf711f6bc60d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.455900542Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=44bddb36-d858-4379-aceb-38fead06826d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.456029783Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=44bddb36-d858-4379-aceb-38fead06826d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:33 functional-407525 crio[5356]: time="2025-12-13T10:49:33.456072614Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=44bddb36-d858-4379-aceb-38fead06826d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:49:34 functional-407525 crio[5356]: time="2025-12-13T10:49:34.014137748Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=38865e3b-a4f0-4f21-855b-8b4194495f1f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:49:37.890707    9498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:37.891400    9498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:37.893142    9498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:37.893718    9498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:49:37.895374    9498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:49:37 up  2:32,  0 user,  load average: 0.52, 0.32, 0.73
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 10:49:35 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:35 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 13 10:49:35 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:35 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:35 functional-407525 kubelet[9371]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:35 functional-407525 kubelet[9371]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:35 functional-407525 kubelet[9371]: E1213 10:49:35.859222    9371 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:35 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:35 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:36 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 13 10:49:36 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:36 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:36 functional-407525 kubelet[9392]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:36 functional-407525 kubelet[9392]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:36 functional-407525 kubelet[9392]: E1213 10:49:36.588523    9392 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:36 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:36 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 10:49:37 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 13 10:49:37 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:37 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 10:49:37 functional-407525 kubelet[9413]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:37 functional-407525 kubelet[9413]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 10:49:37 functional-407525 kubelet[9413]: E1213 10:49:37.330015    9413 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 10:49:37 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 10:49:37 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (354.256163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.48s)
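
The kubelet journal above shows why the apiserver never came back on this node: the host is running cgroup v1, the kubelet exits on startup with "kubelet is configured to not run on a host using cgroup v1", and systemd keeps restarting it (restart counter 1154-1156). The kubeadm preflight warning later in this report names the escape hatch: setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of such an override is shown below, assuming the YAML key is the lowerCamelCase form failCgroupV1 of the option named in the warning; it is an illustration, not a fragment taken from this run's kubeadm.yaml or kubelet config.

	# Hypothetical KubeletConfiguration fragment illustrating the
	# 'FailCgroupV1' option referenced by the kubeadm warning.
	# The key spelling is assumed, not copied from this test run.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

On a cgroup v2 host (where stat -fc %T /sys/fs/cgroup reports cgroup2fs) no such override is needed, which is why the same jobs pass on newer runner images.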

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-407525 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 10:52:27.930457  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:54:06.648184  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:55:29.711049  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:57:27.930483  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:59:06.647779  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-407525 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m13.414679525s)

                                                
                                                
-- stdout --
	* [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000564023s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
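The kubeadm output above fails at wait-control-plane because the kubelet never answers on http://127.0.0.1:10248/healthz, and minikube's own suggestion is to read 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A manual follow-up along those lines might look like the sketch below; the commands are illustrative only and were not executed as part of this run (profile name taken from this report, flag taken from the suggestion above):

	# Illustrative sketch, not part of the recorded test run.
	out/minikube-linux-arm64 -p functional-407525 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	out/minikube-linux-arm64 start -p functional-407525 --extra-config=kubelet.cgroup-driver=systemd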
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-407525 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m13.415976827s for "functional-407525" cluster.
I1213 11:01:52.384811  356328 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
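The inspect output above shows the node container Running, with ports 22, 2376, 5000, 8441 (the configured apiserver port) and 32443 published on loopback host ports. For reference, a single forwarded port can be read back with the same Go-template filter minikube itself runs later in this log (illustrative, not part of the recorded run):

	# Illustrative sketch; template copied from the cli_runner calls further down.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-407525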
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (307.085509ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
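The --format={{.Host}} filter above reports only the host container, which is Running, while the non-zero exit code hints that other components are not healthy. A machine-readable breakdown could be taken with the JSON output mode of the same command (illustrative; not executed in this run, and the flag is assumed from minikube's standard CLI rather than taken from this log):

	# Illustrative sketch, not part of the recorded test run.
	out/minikube-linux-arm64 status -p functional-407525 --output=json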
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-371413 image ls --format yaml --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh     │ functional-371413 ssh pgrep buildkitd                                                                                                             │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ image   │ functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr                                            │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls                                                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format json --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format table --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ delete  │ -p functional-371413                                                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ start   │ -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ start   │ -p functional-407525 --alsologtostderr -v=8                                                                                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:43 UTC │                     │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:latest                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add minikube-local-cache-test:functional-407525                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache delete minikube-local-cache-test:functional-407525                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl images                                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ cache   │ functional-407525 cache reload                                                                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ kubectl │ functional-407525 kubectl -- --context functional-407525 get pods                                                                                 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ start   │ -p functional-407525 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:49:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:49:39.014629  396441 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:49:39.014755  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.014760  396441 out.go:374] Setting ErrFile to fd 2...
	I1213 10:49:39.014764  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.015052  396441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:49:39.015432  396441 out.go:368] Setting JSON to false
	I1213 10:49:39.016356  396441 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9131,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:49:39.016423  396441 start.go:143] virtualization:  
	I1213 10:49:39.019850  396441 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:49:39.022886  396441 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:49:39.022964  396441 notify.go:221] Checking for updates...
	I1213 10:49:39.029514  396441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:49:39.032457  396441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:49:39.035302  396441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:49:39.038191  396441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:49:39.041178  396441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:49:39.044626  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:39.044735  396441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:49:39.073132  396441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:49:39.073240  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.131952  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.12226015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.132042  396441 docker.go:319] overlay module found
	I1213 10:49:39.135181  396441 out.go:179] * Using the docker driver based on existing profile
	I1213 10:49:39.138004  396441 start.go:309] selected driver: docker
	I1213 10:49:39.138012  396441 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.138117  396441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:49:39.138218  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.201683  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.192871513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.202106  396441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:49:39.202131  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:39.202182  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:39.202230  396441 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.205440  396441 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:49:39.208563  396441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:49:39.211465  396441 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:49:39.214245  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:39.214282  396441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:49:39.214290  396441 cache.go:65] Caching tarball of preloaded images
	I1213 10:49:39.214340  396441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:49:39.214371  396441 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:49:39.214379  396441 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:49:39.214508  396441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:49:39.233590  396441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:49:39.233607  396441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:49:39.233619  396441 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:49:39.233649  396441 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:49:39.233703  396441 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-407525"
	I1213 10:49:39.233721  396441 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:49:39.233725  396441 fix.go:54] fixHost starting: 
	I1213 10:49:39.234003  396441 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:49:39.250771  396441 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:49:39.250790  396441 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:49:39.253977  396441 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:49:39.254007  396441 machine.go:94] provisionDockerMachine start ...
	I1213 10:49:39.254089  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.270672  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.270992  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.270998  396441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:49:39.419071  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.419086  396441 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:49:39.419147  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.437001  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.437302  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.437311  396441 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:49:39.596975  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.597049  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.614748  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.615049  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.615063  396441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:49:39.763894  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:49:39.763910  396441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:49:39.763930  396441 ubuntu.go:190] setting up certificates
	I1213 10:49:39.763939  396441 provision.go:84] configureAuth start
	I1213 10:49:39.763997  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:39.782226  396441 provision.go:143] copyHostCerts
	I1213 10:49:39.782297  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:49:39.782308  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:49:39.782382  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:49:39.782470  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:49:39.782473  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:49:39.782511  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:49:39.782561  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:49:39.782565  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:49:39.782587  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:49:39.782630  396441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:49:40.264423  396441 provision.go:177] copyRemoteCerts
	I1213 10:49:40.264477  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:49:40.264518  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.288593  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.395503  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:49:40.413777  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:49:40.432071  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:49:40.449556  396441 provision.go:87] duration metric: took 685.604236ms to configureAuth
	I1213 10:49:40.449573  396441 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:49:40.449767  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:40.449873  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.466720  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:40.467023  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:40.467036  396441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:49:40.812989  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:49:40.813002  396441 machine.go:97] duration metric: took 1.558987505s to provisionDockerMachine
	I1213 10:49:40.813012  396441 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:49:40.813024  396441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:49:40.813085  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:49:40.813128  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.831095  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.935727  396441 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:49:40.939068  396441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:49:40.939087  396441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:49:40.939096  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:49:40.939151  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:49:40.939232  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:49:40.939303  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:49:40.939344  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:49:40.947101  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:40.964732  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:49:40.981668  396441 start.go:296] duration metric: took 168.641746ms for postStartSetup
	I1213 10:49:40.981767  396441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:49:40.981804  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.001302  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.104610  396441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:49:41.109266  396441 fix.go:56] duration metric: took 1.875532342s for fixHost
	I1213 10:49:41.109282  396441 start.go:83] releasing machines lock for "functional-407525", held for 1.875571571s
	I1213 10:49:41.109349  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:41.125841  396441 ssh_runner.go:195] Run: cat /version.json
	I1213 10:49:41.125888  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.126157  396441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:49:41.126214  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.148984  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.157093  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.349053  396441 ssh_runner.go:195] Run: systemctl --version
	I1213 10:49:41.355137  396441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:49:41.394464  396441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:49:41.399282  396441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:49:41.399342  396441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:49:41.407074  396441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:49:41.407089  396441 start.go:496] detecting cgroup driver to use...
	I1213 10:49:41.407118  396441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:49:41.407177  396441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:49:41.422248  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:49:41.434814  396441 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:49:41.434866  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:49:41.450404  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:49:41.463493  396441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:49:41.587216  396441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:49:41.708085  396441 docker.go:234] disabling docker service ...
	I1213 10:49:41.708178  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:49:41.726011  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:49:41.739486  396441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:49:41.858015  396441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:49:41.976835  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:49:41.990126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:49:42.004186  396441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:49:42.004281  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.015561  396441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:49:42.015636  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.026721  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.037311  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.047280  396441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:49:42.056517  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.067880  396441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.078430  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.089815  396441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:49:42.100093  396441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:49:42.110006  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.245156  396441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:49:42.438084  396441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:49:42.438159  396441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:49:42.442010  396441 start.go:564] Will wait 60s for crictl version
	I1213 10:49:42.442064  396441 ssh_runner.go:195] Run: which crictl
	I1213 10:49:42.445629  396441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:49:42.469110  396441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:49:42.469189  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.498052  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.536633  396441 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:49:42.539603  396441 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:49:42.571469  396441 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:49:42.578474  396441 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:49:42.582400  396441 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:49:42.582534  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:42.582601  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.622515  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.622526  396441 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:49:42.622581  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.647505  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.647532  396441 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:49:42.647540  396441 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:49:42.647645  396441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 10:49:42.647723  396441 ssh_runner.go:195] Run: crio config
	I1213 10:49:42.707356  396441 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:49:42.707414  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:42.707422  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:42.707430  396441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:49:42.707452  396441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:49:42.707613  396441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:49:42.707687  396441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:49:42.715307  396441 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:49:42.715378  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:49:42.722969  396441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:49:42.735593  396441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:49:42.747933  396441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
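	Note: the kubeadm.yaml.new written above is rendered from the kubeadm options logged at kubeadm.go:190. A minimal Go sketch of rendering one fragment of that ClusterConfiguration with text/template follows; the struct and template text are assumptions for illustration, not minikube's actual template.

	// kubeadm_template.go - illustrative sketch: renders a ClusterConfiguration
	// fragment from values that appear in the log above.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	type clusterCfg struct {
		KubernetesVersion    string
		ControlPlaneEndpoint string
		PodSubnet            string
		ServiceSubnet        string
		AdmissionPlugins     string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "{{.AdmissionPlugins}}"
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		cfg := clusterCfg{
			KubernetesVersion:    "v1.35.0-beta.0",
			ControlPlaneEndpoint: "control-plane.minikube.internal:8441",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
			AdmissionPlugins:     "NamespaceAutoProvision", // the user-provided override from the log
		}
		// Render to stdout; in the run above the result is written to /var/tmp/minikube/kubeadm.yaml.new.
		if err := template.Must(template.New("cc").Parse(tmpl)).Execute(os.Stdout, cfg); err != nil {
			log.Fatal(err)
		}
	}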
	I1213 10:49:42.760993  396441 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:49:42.765274  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.881089  396441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:49:43.272837  396441 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:49:43.272850  396441 certs.go:195] generating shared ca certs ...
	I1213 10:49:43.272866  396441 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:49:43.273008  396441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:49:43.273053  396441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:49:43.273060  396441 certs.go:257] generating profile certs ...
	I1213 10:49:43.273166  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:49:43.273224  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:49:43.273264  396441 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:49:43.273384  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:49:43.273414  396441 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:49:43.273421  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:49:43.273447  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:49:43.273476  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:49:43.273501  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:49:43.273543  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:43.274189  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:49:43.293217  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:49:43.313563  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:49:43.332800  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:49:43.356461  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:49:43.375598  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:49:43.393764  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:49:43.411407  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:49:43.429560  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:49:43.447014  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:49:43.465017  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:49:43.483101  396441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:49:43.496527  396441 ssh_runner.go:195] Run: openssl version
	I1213 10:49:43.502994  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.510763  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:49:43.518540  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522603  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522661  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.566464  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:49:43.574093  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.581656  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:49:43.589363  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593193  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593258  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.634480  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:49:43.641940  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.649200  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:49:43.656832  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660735  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660790  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.706761  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
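	Note: the openssl/ln/test sequence above installs each CA under /etc/ssl/certs as a <subject-hash>.0 symlink so OpenSSL can locate it at verify time. A rough Go equivalent of those three shell steps, with paths copied from the log; it needs root and has simplified error handling.

	// ca_symlink.go - illustrative sketch of the logged steps:
	// openssl x509 -hash -noout -in <pem>, ln -fs, test -L.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash OpenSSL uses to find the CA during verification.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of `ln -fs`: drop any stale link, then point it at the cert.
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", pem)
	}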
	I1213 10:49:43.714203  396441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:49:43.718007  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:49:43.761049  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:49:43.803978  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:49:43.847848  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:49:43.889404  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:49:43.931127  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 10:49:43.975457  396441 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:43.975563  396441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:49:43.975628  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.005477  396441 cri.go:89] found id: ""
	I1213 10:49:44.005555  396441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:49:44.016406  396441 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:49:44.016416  396441 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:49:44.016469  396441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:49:44.028094  396441 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.028621  396441 kubeconfig.go:125] found "functional-407525" server: "https://192.168.49.2:8441"
	I1213 10:49:44.029882  396441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:49:44.039549  396441 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:35:07.660360228 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:49:42.756829139 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
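	Note: the drift check above relies on diff's exit status: 0 means the stored and freshly rendered kubeadm configs match, 1 means they differ and the cluster is reconfigured from the new file. A small Go sketch of that check, with paths from the log; illustrative only.

	// drift_check.go - illustrative sketch: run `diff -u old new` and treat
	// exit status 1 as "configs differ".
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("no drift: configs are identical")
			return
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			// diff exits 1 when the files differ; the unified diff shows what changed.
			fmt.Printf("detected kubeadm config drift:\n%s", out)
			return
		}
		log.Fatal(err) // any other exit status means diff itself failed
	}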
	I1213 10:49:44.039559  396441 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:49:44.039569  396441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:49:44.039622  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.076693  396441 cri.go:89] found id: ""
	I1213 10:49:44.076751  396441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:49:44.096721  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:49:44.104663  396441 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 10:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 10:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:39 /etc/kubernetes/scheduler.conf
	
	I1213 10:49:44.104731  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:49:44.112473  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:49:44.119938  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.119996  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:49:44.127386  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.135062  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.135113  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.142352  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:49:44.150087  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.150140  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:49:44.157689  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:49:44.166075  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:44.211012  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.340316  396441 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.129279793s)
	I1213 10:49:46.340374  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.548065  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.621630  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.676051  396441 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:49:46.676117  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:47.176335  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:47.676600  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:48.176220  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:48.676514  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:49.177109  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:49.677029  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:50.176294  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:50.676405  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:51.176207  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:51.677115  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:52.176309  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:52.676843  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:53.176518  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:53.677139  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:54.176272  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:54.677116  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:55.176949  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:55.677027  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:56.176855  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:56.677287  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:57.176985  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:57.676291  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:58.176321  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:58.676311  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:59.177074  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:59.676498  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:00.177244  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:00.676377  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:01.176944  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:01.676370  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:02.176565  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:02.676374  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:03.176325  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:03.677205  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:04.177202  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:04.676995  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:05.176541  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:05.676768  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:06.176328  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:06.676318  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:07.176298  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:07.676607  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:08.176977  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:08.676972  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:09.176754  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:09.676315  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:10.176824  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:10.676204  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:11.177281  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:11.676341  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:12.176307  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:12.677058  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:13.176868  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:13.676294  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:14.176196  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:14.676345  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:15.176220  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:15.676507  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:16.177216  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:16.676814  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:17.177128  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:17.676923  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:18.177103  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:18.677241  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:19.176631  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:19.676250  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:20.177039  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:20.676330  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:21.176991  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:21.676979  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:22.176310  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:22.676330  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:23.177072  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:23.676322  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:24.177240  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:24.676323  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:25.176911  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:25.677053  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:26.176471  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:26.676452  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:27.177028  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:27.676317  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:28.176975  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:28.676338  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:29.176379  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:29.676600  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:30.176351  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:30.676375  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:31.177240  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:31.677058  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:32.176843  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:32.676436  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:33.176344  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:33.677269  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:34.176296  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:34.676316  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:35.176823  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:35.676192  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:36.177128  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:36.677155  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:37.176402  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:37.676320  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:38.176310  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:38.677003  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:39.176915  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:39.676966  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:40.176371  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:40.676264  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:41.176771  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:41.676461  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:42.176264  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:42.676335  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:43.177015  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:43.676312  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:44.176383  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:44.676333  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:45.176214  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:45.676348  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:46.177104  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
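	Note: the repeated pgrep calls above are a ~500ms poll for the kube-apiserver process, which never appears before the wait budget runs out and log collection kicks in. A minimal Go sketch of such a poll loop follows; the overall timeout below is a placeholder, not minikube's actual value.

	// wait_apiserver.go - illustrative sketch of polling for the apiserver process
	// with the same probe command seen in the log.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		timeout := 60 * time.Second // placeholder budget, an assumption for this sketch
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Same probe as the log: look for a kube-apiserver process started for this minikube profile.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for the kube-apiserver process")
	}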
	I1213 10:50:46.676677  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:46.676771  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:46.701985  396441 cri.go:89] found id: ""
	I1213 10:50:46.701999  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.702006  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:46.702011  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:46.702065  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:46.727261  396441 cri.go:89] found id: ""
	I1213 10:50:46.727275  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.727282  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:46.727287  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:46.727352  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:46.756930  396441 cri.go:89] found id: ""
	I1213 10:50:46.756944  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.756952  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:46.756957  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:46.757025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:46.788731  396441 cri.go:89] found id: ""
	I1213 10:50:46.788745  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.788752  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:46.788757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:46.788810  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:46.816991  396441 cri.go:89] found id: ""
	I1213 10:50:46.817004  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.817012  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:46.817017  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:46.817072  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:46.847482  396441 cri.go:89] found id: ""
	I1213 10:50:46.847498  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.847505  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:46.847559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:46.847628  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:46.872720  396441 cri.go:89] found id: ""
	I1213 10:50:46.872734  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.872741  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:46.872749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:46.872759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:46.942912  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:46.942931  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:46.971862  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:46.971879  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:47.038918  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:47.038938  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:47.053895  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:47.053912  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:47.119106  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
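	Note: the kubectl failures above all reduce to nothing listening on localhost:8441. A hedged Go sketch of probing that endpoint directly; the /livez path and retry count are assumptions, and TLS verification is skipped because the sketch carries no client certificates.

	// apiserver_probe.go - illustrative sketch: HTTPS GET against the apiserver
	// port the failing kubectl calls try to reach, retried a few times.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 10; i++ {
			resp, err := client.Get("https://localhost:8441/livez")
			if err == nil {
				resp.Body.Close()
				fmt.Println("apiserver answered with HTTP status", resp.StatusCode)
				return
			}
			// "connection refused" here matches the kubectl errors in the log: nothing
			// is listening on 8441 because the apiserver container never started.
			fmt.Println("probe failed:", err)
			time.Sleep(3 * time.Second)
		}
	}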
	I1213 10:50:49.619370  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:49.629150  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:49.629213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:49.658173  396441 cri.go:89] found id: ""
	I1213 10:50:49.658186  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.658194  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:49.658199  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:49.658256  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:49.683401  396441 cri.go:89] found id: ""
	I1213 10:50:49.683414  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.683422  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:49.683427  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:49.683484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:49.708416  396441 cri.go:89] found id: ""
	I1213 10:50:49.708440  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.708448  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:49.708454  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:49.708520  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:49.737305  396441 cri.go:89] found id: ""
	I1213 10:50:49.737319  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.737326  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:49.737331  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:49.737385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:49.761415  396441 cri.go:89] found id: ""
	I1213 10:50:49.761431  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.761438  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:49.761443  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:49.761496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:49.805122  396441 cri.go:89] found id: ""
	I1213 10:50:49.805135  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.805142  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:49.805147  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:49.805205  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:49.846981  396441 cri.go:89] found id: ""
	I1213 10:50:49.846995  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.847002  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:49.847010  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:49.847020  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:49.918064  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:49.918084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:49.947649  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:49.947666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:50.012059  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:50.012084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:50.028985  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:50.029010  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:50.098147  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.599845  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:52.610036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:52.610095  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:52.638582  396441 cri.go:89] found id: ""
	I1213 10:50:52.638597  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.638603  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:52.638608  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:52.638670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:52.663295  396441 cri.go:89] found id: ""
	I1213 10:50:52.663308  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.663315  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:52.663320  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:52.663375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:52.689168  396441 cri.go:89] found id: ""
	I1213 10:50:52.689182  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.689189  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:52.689194  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:52.689253  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:52.714589  396441 cri.go:89] found id: ""
	I1213 10:50:52.714602  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.714610  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:52.714615  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:52.714669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:52.742324  396441 cri.go:89] found id: ""
	I1213 10:50:52.742338  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.742345  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:52.742363  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:52.742420  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:52.778053  396441 cri.go:89] found id: ""
	I1213 10:50:52.778067  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.778074  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:52.778079  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:52.778138  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:52.805632  396441 cri.go:89] found id: ""
	I1213 10:50:52.805646  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.805653  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:52.805661  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:52.805671  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:52.875461  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:52.875481  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:52.890245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:52.890261  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:52.957587  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.957599  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:52.957612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:53.025361  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:53.025388  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
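(The cycle above is minikube waiting for the control plane: it probes for a kube-apiserver process, lists CRI containers for each expected component — kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet — and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs before retrying a few seconds later. The describe-nodes step fails because nothing is listening on the apiserver port 8441 yet. A rough way to re-run the same probe by hand is sketched below; the crictl/journalctl invocations and the 8441 port come straight from the log, while the `minikube ssh` wrapper and the healthz check are assumptions, not part of the test itself.

	# hypothetical manual re-run of the probe shown in the log (assumes `minikube ssh` access)
	minikube ssh -- sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = container never created
	minikube ssh -- sudo journalctl -u kubelet -n 400 --no-pager      # why the kubelet is not starting the static pods
	minikube ssh -- curl -ksf https://localhost:8441/healthz \
	  || echo "apiserver not listening on 8441 (matches the connection-refused errors above)"
)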
	I1213 10:50:55.556570  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:55.566463  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:55.566537  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:55.593903  396441 cri.go:89] found id: ""
	I1213 10:50:55.593917  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.593924  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:55.593929  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:55.593992  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:55.619079  396441 cri.go:89] found id: ""
	I1213 10:50:55.619093  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.619101  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:55.619106  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:55.619162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:55.645916  396441 cri.go:89] found id: ""
	I1213 10:50:55.645931  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.645938  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:55.645943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:55.646012  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:55.671377  396441 cri.go:89] found id: ""
	I1213 10:50:55.671397  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.671405  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:55.671410  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:55.671469  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:55.697872  396441 cri.go:89] found id: ""
	I1213 10:50:55.697886  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.697894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:55.697917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:55.697976  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:55.723576  396441 cri.go:89] found id: ""
	I1213 10:50:55.723589  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.723597  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:55.723602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:55.723655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:55.751256  396441 cri.go:89] found id: ""
	I1213 10:50:55.751270  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.751277  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:55.751286  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:55.751296  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:55.821963  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:55.821982  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:55.836343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:55.836357  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:55.903582  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:55.903594  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:55.903605  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:55.975012  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:55.975037  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:58.506699  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:58.517103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:58.517162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:58.542695  396441 cri.go:89] found id: ""
	I1213 10:50:58.542717  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.542725  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:58.542730  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:58.542787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:58.574075  396441 cri.go:89] found id: ""
	I1213 10:50:58.574089  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.574096  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:58.574101  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:58.574161  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:58.602982  396441 cri.go:89] found id: ""
	I1213 10:50:58.602997  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.603003  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:58.603008  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:58.603066  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:58.628158  396441 cri.go:89] found id: ""
	I1213 10:50:58.628172  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.628179  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:58.628185  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:58.628241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:58.653050  396441 cri.go:89] found id: ""
	I1213 10:50:58.653064  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.653071  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:58.653076  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:58.653133  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:58.678853  396441 cri.go:89] found id: ""
	I1213 10:50:58.678867  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.678875  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:58.678880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:58.678938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:58.704667  396441 cri.go:89] found id: ""
	I1213 10:50:58.704681  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.704689  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:58.704696  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:58.704706  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:58.769708  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:58.769731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:58.786197  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:58.786214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:58.859562  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:58.859572  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:58.859583  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:58.929132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:58.929151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.457488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:01.467675  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:01.467734  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:01.494648  396441 cri.go:89] found id: ""
	I1213 10:51:01.494662  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.494669  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:01.494675  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:01.494735  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:01.524042  396441 cri.go:89] found id: ""
	I1213 10:51:01.524056  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.524062  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:01.524068  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:01.524130  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:01.550111  396441 cri.go:89] found id: ""
	I1213 10:51:01.550126  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.550133  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:01.550139  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:01.550207  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:01.579191  396441 cri.go:89] found id: ""
	I1213 10:51:01.579205  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.579213  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:01.579218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:01.579274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:01.606365  396441 cri.go:89] found id: ""
	I1213 10:51:01.606379  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.606387  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:01.606393  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:01.606456  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:01.632570  396441 cri.go:89] found id: ""
	I1213 10:51:01.632584  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.632593  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:01.632598  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:01.632659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:01.659645  396441 cri.go:89] found id: ""
	I1213 10:51:01.659663  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.659671  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:01.659683  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:01.659694  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.689331  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:01.689348  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:01.754743  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:01.754766  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:01.772787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:01.772804  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:01.858533  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:01.858545  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:01.858555  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:04.427384  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:04.437715  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:04.437777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:04.463479  396441 cri.go:89] found id: ""
	I1213 10:51:04.463494  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.463501  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:04.463521  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:04.463580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:04.491057  396441 cri.go:89] found id: ""
	I1213 10:51:04.491072  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.491079  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:04.491084  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:04.491142  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:04.518458  396441 cri.go:89] found id: ""
	I1213 10:51:04.518471  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.518478  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:04.518483  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:04.518558  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:04.544830  396441 cri.go:89] found id: ""
	I1213 10:51:04.544844  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.544852  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:04.544857  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:04.544915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:04.571154  396441 cri.go:89] found id: ""
	I1213 10:51:04.571168  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.571177  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:04.571182  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:04.571241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:04.596261  396441 cri.go:89] found id: ""
	I1213 10:51:04.596275  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.596283  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:04.596288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:04.596344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:04.625558  396441 cri.go:89] found id: ""
	I1213 10:51:04.625572  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.625580  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:04.625587  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:04.625598  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:04.656944  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:04.656961  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:04.722740  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:04.722759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:04.738031  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:04.738051  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:04.817645  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:04.817655  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:04.817669  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.391199  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:07.401600  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:07.401657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:07.427331  396441 cri.go:89] found id: ""
	I1213 10:51:07.427346  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.427353  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:07.427358  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:07.427417  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:07.452053  396441 cri.go:89] found id: ""
	I1213 10:51:07.452067  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.452074  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:07.452079  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:07.452134  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:07.477750  396441 cri.go:89] found id: ""
	I1213 10:51:07.477764  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.477772  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:07.477777  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:07.477836  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:07.506642  396441 cri.go:89] found id: ""
	I1213 10:51:07.506657  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.506664  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:07.506669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:07.506727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:07.533730  396441 cri.go:89] found id: ""
	I1213 10:51:07.533744  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.533751  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:07.533757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:07.533815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:07.561505  396441 cri.go:89] found id: ""
	I1213 10:51:07.561521  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.561528  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:07.561534  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:07.561587  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:07.586129  396441 cri.go:89] found id: ""
	I1213 10:51:07.586142  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.586149  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:07.586157  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:07.586167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:07.601150  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:07.601167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:07.664624  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:07.664636  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:07.664649  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.733213  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:07.733233  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:07.762844  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:07.762860  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.334136  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:10.344504  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:10.344575  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:10.369562  396441 cri.go:89] found id: ""
	I1213 10:51:10.369575  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.369582  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:10.369587  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:10.369652  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:10.399083  396441 cri.go:89] found id: ""
	I1213 10:51:10.399097  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.399104  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:10.399110  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:10.399166  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:10.425761  396441 cri.go:89] found id: ""
	I1213 10:51:10.425786  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.425794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:10.425799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:10.425863  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:10.452658  396441 cri.go:89] found id: ""
	I1213 10:51:10.452672  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.452679  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:10.452685  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:10.452741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:10.477286  396441 cri.go:89] found id: ""
	I1213 10:51:10.477300  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.477308  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:10.477313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:10.477375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:10.502400  396441 cri.go:89] found id: ""
	I1213 10:51:10.502414  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.502421  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:10.502427  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:10.502483  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:10.527113  396441 cri.go:89] found id: ""
	I1213 10:51:10.527127  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.527134  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:10.527142  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:10.527152  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:10.558574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:10.558590  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.623165  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:10.623185  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:10.637513  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:10.637528  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:10.700566  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:10.700576  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:10.700586  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.275221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:13.285371  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:13.285427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:13.310677  396441 cri.go:89] found id: ""
	I1213 10:51:13.310691  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.310699  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:13.310704  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:13.310766  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:13.339471  396441 cri.go:89] found id: ""
	I1213 10:51:13.339485  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.339493  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:13.339498  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:13.339572  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:13.363772  396441 cri.go:89] found id: ""
	I1213 10:51:13.363787  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.363794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:13.363799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:13.363854  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:13.389059  396441 cri.go:89] found id: ""
	I1213 10:51:13.389073  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.389080  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:13.389085  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:13.389140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:13.414845  396441 cri.go:89] found id: ""
	I1213 10:51:13.414859  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.414866  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:13.414871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:13.414926  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:13.444040  396441 cri.go:89] found id: ""
	I1213 10:51:13.444054  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.444061  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:13.444066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:13.444122  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:13.472753  396441 cri.go:89] found id: ""
	I1213 10:51:13.472769  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.472779  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:13.472791  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:13.472806  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:13.487326  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:13.487342  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:13.553218  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:13.553229  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:13.553239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.623642  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:13.623662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:13.652820  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:13.652836  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:16.219667  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:16.229714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:16.229774  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:16.256550  396441 cri.go:89] found id: ""
	I1213 10:51:16.256564  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.256571  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:16.256576  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:16.256638  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:16.281266  396441 cri.go:89] found id: ""
	I1213 10:51:16.281280  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.281286  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:16.281292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:16.281347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:16.313494  396441 cri.go:89] found id: ""
	I1213 10:51:16.313509  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.313517  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:16.313522  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:16.313580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:16.338750  396441 cri.go:89] found id: ""
	I1213 10:51:16.338775  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.338783  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:16.338788  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:16.338852  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:16.363883  396441 cri.go:89] found id: ""
	I1213 10:51:16.363898  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.363905  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:16.363910  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:16.363980  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:16.390029  396441 cri.go:89] found id: ""
	I1213 10:51:16.390053  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.390060  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:16.390066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:16.390123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:16.415617  396441 cri.go:89] found id: ""
	I1213 10:51:16.415630  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.415637  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:16.415645  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:16.415660  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:16.430631  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:16.430647  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:16.492590  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:16.492603  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:16.492613  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:16.561556  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:16.561578  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:16.589545  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:16.589561  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
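(Every retry ends the same way: no kube-apiserver container exists, so kubectl against localhost:8441 keeps getting connection refused. When the apiserver never even appears in `crictl ps -a`, the next things worth checking by hand are whether a pod sandbox was ever created and whether the static pod manifests are in place. A hedged sketch, assuming the standard kubeadm layout minikube uses inside the node:

	# hypothetical follow-up inside the node (e.g. via `minikube ssh`)
	sudo crictl pods --name kube-apiserver        # was a sandbox for the apiserver pod ever created?
	sudo ls /etc/kubernetes/manifests/            # static pod manifests the kubelet should be acting on
	sudo journalctl -u crio -n 200 --no-pager     # CRI-O's view of any failed container creations
)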
	I1213 10:51:19.159792  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:19.170596  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:19.170661  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:19.198953  396441 cri.go:89] found id: ""
	I1213 10:51:19.198967  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.198974  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:19.198979  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:19.199036  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:19.225113  396441 cri.go:89] found id: ""
	I1213 10:51:19.225128  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.225135  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:19.225140  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:19.225195  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:19.250894  396441 cri.go:89] found id: ""
	I1213 10:51:19.250908  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.250916  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:19.250921  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:19.250975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:19.277076  396441 cri.go:89] found id: ""
	I1213 10:51:19.277091  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.277098  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:19.277103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:19.277164  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:19.304480  396441 cri.go:89] found id: ""
	I1213 10:51:19.304495  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.304502  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:19.304507  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:19.304567  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:19.330126  396441 cri.go:89] found id: ""
	I1213 10:51:19.330140  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.330147  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:19.330152  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:19.330214  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:19.355882  396441 cri.go:89] found id: ""
	I1213 10:51:19.355896  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.355904  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:19.355912  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:19.355922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:19.423413  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:19.423435  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:19.457267  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:19.457283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:19.523500  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:19.523525  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:19.538313  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:19.538329  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:19.607695  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:22.108783  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:22.118887  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:22.118946  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:22.146848  396441 cri.go:89] found id: ""
	I1213 10:51:22.146863  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.146870  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:22.146875  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:22.146929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:22.173022  396441 cri.go:89] found id: ""
	I1213 10:51:22.173036  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.173049  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:22.173055  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:22.173110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:22.197674  396441 cri.go:89] found id: ""
	I1213 10:51:22.197687  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.197695  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:22.197700  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:22.197757  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:22.225539  396441 cri.go:89] found id: ""
	I1213 10:51:22.225553  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.225560  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:22.225565  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:22.225624  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:22.253269  396441 cri.go:89] found id: ""
	I1213 10:51:22.253282  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.253290  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:22.253294  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:22.253355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:22.279157  396441 cri.go:89] found id: ""
	I1213 10:51:22.279172  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.279179  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:22.279184  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:22.279238  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:22.308952  396441 cri.go:89] found id: ""
	I1213 10:51:22.308965  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.308972  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:22.308979  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:22.309000  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:22.323813  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:22.323828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:22.388544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:22.388554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:22.388565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:22.456639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:22.456659  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:22.485416  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:22.485432  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.052020  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:25.063916  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:25.063975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:25.100470  396441 cri.go:89] found id: ""
	I1213 10:51:25.100484  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.100492  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:25.100498  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:25.100559  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:25.128317  396441 cri.go:89] found id: ""
	I1213 10:51:25.128331  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.128339  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:25.128344  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:25.128399  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:25.159302  396441 cri.go:89] found id: ""
	I1213 10:51:25.159316  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.159323  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:25.159328  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:25.159386  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:25.186563  396441 cri.go:89] found id: ""
	I1213 10:51:25.186577  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.186591  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:25.186597  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:25.186656  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:25.212652  396441 cri.go:89] found id: ""
	I1213 10:51:25.212666  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.212673  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:25.212678  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:25.212738  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:25.238215  396441 cri.go:89] found id: ""
	I1213 10:51:25.238229  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.238236  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:25.238242  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:25.238314  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:25.264506  396441 cri.go:89] found id: ""
	I1213 10:51:25.264519  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.264526  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:25.264533  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:25.264544  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:25.293035  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:25.293052  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.358428  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:25.358448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:25.373611  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:25.373627  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:25.438267  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:25.438277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:25.438288  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:28.007912  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:28.020840  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:28.020914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:28.054985  396441 cri.go:89] found id: ""
	I1213 10:51:28.054999  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.055007  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:28.055012  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:28.055076  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:28.086101  396441 cri.go:89] found id: ""
	I1213 10:51:28.086116  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.086123  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:28.086128  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:28.086184  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:28.114710  396441 cri.go:89] found id: ""
	I1213 10:51:28.114725  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.114732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:28.114737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:28.114796  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:28.141803  396441 cri.go:89] found id: ""
	I1213 10:51:28.141817  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.141825  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:28.141831  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:28.141891  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:28.176974  396441 cri.go:89] found id: ""
	I1213 10:51:28.176989  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.176997  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:28.177002  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:28.177063  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:28.202686  396441 cri.go:89] found id: ""
	I1213 10:51:28.202700  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.202707  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:28.202712  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:28.202777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:28.229573  396441 cri.go:89] found id: ""
	I1213 10:51:28.229587  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.229595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:28.229604  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:28.229617  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:28.245053  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:28.245070  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:28.314477  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:28.314487  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:28.314513  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:28.382755  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:28.382775  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:28.411608  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:28.411626  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:30.977998  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:30.988313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:30.988371  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:31.017637  396441 cri.go:89] found id: ""
	I1213 10:51:31.017652  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.017659  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:31.017664  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:31.017739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:31.051049  396441 cri.go:89] found id: ""
	I1213 10:51:31.051064  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.051071  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:31.051076  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:31.051147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:31.091994  396441 cri.go:89] found id: ""
	I1213 10:51:31.092012  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.092019  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:31.092025  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:31.092087  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:31.121068  396441 cri.go:89] found id: ""
	I1213 10:51:31.121083  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.121090  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:31.121095  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:31.121154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:31.148227  396441 cri.go:89] found id: ""
	I1213 10:51:31.148240  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.148248  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:31.148253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:31.148309  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:31.174904  396441 cri.go:89] found id: ""
	I1213 10:51:31.174919  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.174926  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:31.174932  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:31.174996  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:31.200730  396441 cri.go:89] found id: ""
	I1213 10:51:31.200743  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.200750  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:31.200757  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:31.200768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:31.215296  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:31.215315  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:31.279266  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:31.279277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:31.279286  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:31.346253  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:31.346273  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:31.374790  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:31.374805  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:33.942724  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:33.953904  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:33.953965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:33.979791  396441 cri.go:89] found id: ""
	I1213 10:51:33.979806  396441 logs.go:282] 0 containers: []
	W1213 10:51:33.979813  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:33.979819  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:33.979882  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:34.009113  396441 cri.go:89] found id: ""
	I1213 10:51:34.009129  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.009139  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:34.009145  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:34.009213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:34.054885  396441 cri.go:89] found id: ""
	I1213 10:51:34.054903  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.054911  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:34.054917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:34.054978  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:34.087332  396441 cri.go:89] found id: ""
	I1213 10:51:34.087346  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.087354  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:34.087360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:34.087416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:34.118541  396441 cri.go:89] found id: ""
	I1213 10:51:34.118556  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.118563  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:34.118568  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:34.118626  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:34.148286  396441 cri.go:89] found id: ""
	I1213 10:51:34.148300  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.148308  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:34.148313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:34.148368  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:34.174436  396441 cri.go:89] found id: ""
	I1213 10:51:34.174450  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.174457  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:34.174465  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:34.174484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:34.239233  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:34.239255  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:34.253915  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:34.253932  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:34.319992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:34.320001  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:34.320011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:34.387971  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:34.387992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:36.918587  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:36.930360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:36.930424  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:36.956712  396441 cri.go:89] found id: ""
	I1213 10:51:36.956726  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.956733  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:36.956738  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:36.956795  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:36.982448  396441 cri.go:89] found id: ""
	I1213 10:51:36.982462  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.982469  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:36.982474  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:36.982541  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:37.014971  396441 cri.go:89] found id: ""
	I1213 10:51:37.014987  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.014994  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:37.015000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:37.015090  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:37.045960  396441 cri.go:89] found id: ""
	I1213 10:51:37.045974  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.045981  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:37.045987  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:37.046044  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:37.077901  396441 cri.go:89] found id: ""
	I1213 10:51:37.077915  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.077933  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:37.077938  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:37.077995  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:37.105187  396441 cri.go:89] found id: ""
	I1213 10:51:37.105207  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.105214  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:37.105220  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:37.105275  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:37.134077  396441 cri.go:89] found id: ""
	I1213 10:51:37.134102  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.134110  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:37.134118  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:37.134129  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:37.199336  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:37.199355  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:37.213787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:37.213808  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:37.282802  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:37.282817  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:37.282827  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:37.352930  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:37.352958  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:39.888029  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:39.898120  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:39.898197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:39.925423  396441 cri.go:89] found id: ""
	I1213 10:51:39.925437  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.925444  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:39.925450  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:39.925510  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:39.951432  396441 cri.go:89] found id: ""
	I1213 10:51:39.951446  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.951454  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:39.951459  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:39.951547  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:39.977216  396441 cri.go:89] found id: ""
	I1213 10:51:39.977231  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.977238  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:39.977244  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:39.977298  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:40.019791  396441 cri.go:89] found id: ""
	I1213 10:51:40.019808  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.019816  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:40.019823  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:40.019900  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:40.051826  396441 cri.go:89] found id: ""
	I1213 10:51:40.051840  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.051847  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:40.051853  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:40.051928  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:40.091165  396441 cri.go:89] found id: ""
	I1213 10:51:40.091192  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.091200  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:40.091206  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:40.091272  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:40.122957  396441 cri.go:89] found id: ""
	I1213 10:51:40.122972  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.122979  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:40.122986  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:40.122998  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:40.186192  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:40.186204  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:40.186214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:40.252986  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:40.253005  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:40.283019  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:40.283042  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:40.347489  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:40.347521  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:42.863361  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:42.874757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:42.874824  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:42.899348  396441 cri.go:89] found id: ""
	I1213 10:51:42.899362  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.899370  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:42.899375  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:42.899440  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:42.925079  396441 cri.go:89] found id: ""
	I1213 10:51:42.925092  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.925100  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:42.925105  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:42.925165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:42.951388  396441 cri.go:89] found id: ""
	I1213 10:51:42.951403  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.951410  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:42.951415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:42.951470  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:42.977668  396441 cri.go:89] found id: ""
	I1213 10:51:42.977682  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.977688  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:42.977694  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:42.977748  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:43.002136  396441 cri.go:89] found id: ""
	I1213 10:51:43.002150  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.002157  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:43.002162  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:43.002219  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:43.038950  396441 cri.go:89] found id: ""
	I1213 10:51:43.038963  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.038971  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:43.038976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:43.039033  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:43.071573  396441 cri.go:89] found id: ""
	I1213 10:51:43.071588  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.071595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:43.071602  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:43.071615  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:43.141998  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:43.142019  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:43.157258  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:43.157274  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:43.224710  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:43.224720  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:43.224731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:43.294968  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:43.294988  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:45.825007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:45.835672  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:45.835743  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:45.861353  396441 cri.go:89] found id: ""
	I1213 10:51:45.861375  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.861382  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:45.861388  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:45.861452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:45.888508  396441 cri.go:89] found id: ""
	I1213 10:51:45.888522  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.888530  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:45.888534  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:45.888594  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:45.915026  396441 cri.go:89] found id: ""
	I1213 10:51:45.915040  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.915049  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:45.915054  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:45.915108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:45.940299  396441 cri.go:89] found id: ""
	I1213 10:51:45.940313  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.940320  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:45.940325  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:45.940382  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:45.965643  396441 cri.go:89] found id: ""
	I1213 10:51:45.965657  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.965664  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:45.965669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:45.965722  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:45.992269  396441 cri.go:89] found id: ""
	I1213 10:51:45.992283  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.992290  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:45.992295  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:45.992354  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:46.024907  396441 cri.go:89] found id: ""
	I1213 10:51:46.024922  396441 logs.go:282] 0 containers: []
	W1213 10:51:46.024941  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:46.024950  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:46.024980  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:46.072645  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:46.072664  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:46.144539  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:46.144569  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:46.160047  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:46.160063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:46.224857  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:46.224867  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:46.224878  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:48.792536  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:48.802577  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:48.802642  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:48.826706  396441 cri.go:89] found id: ""
	I1213 10:51:48.826720  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.826727  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:48.826733  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:48.826787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:48.851205  396441 cri.go:89] found id: ""
	I1213 10:51:48.851219  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.851226  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:48.851232  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:48.851286  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:48.875646  396441 cri.go:89] found id: ""
	I1213 10:51:48.875661  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.875669  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:48.875674  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:48.875742  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:48.902019  396441 cri.go:89] found id: ""
	I1213 10:51:48.902033  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.902041  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:48.902046  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:48.902102  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:48.926529  396441 cri.go:89] found id: ""
	I1213 10:51:48.926543  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.926550  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:48.926555  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:48.926610  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:48.952549  396441 cri.go:89] found id: ""
	I1213 10:51:48.952563  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.952570  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:48.952576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:48.952637  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:48.977178  396441 cri.go:89] found id: ""
	I1213 10:51:48.977191  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.977198  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:48.977206  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:48.977218  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:49.044123  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:49.044147  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:49.066217  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:49.066239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:49.145635  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:49.145645  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:49.145655  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:49.212965  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:49.212984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:51.744115  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:51.755896  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:51.755984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:51.790945  396441 cri.go:89] found id: ""
	I1213 10:51:51.790958  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.790965  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:51.790970  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:51.791024  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:51.816688  396441 cri.go:89] found id: ""
	I1213 10:51:51.816702  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.816709  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:51.816715  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:51.816782  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:51.841873  396441 cri.go:89] found id: ""
	I1213 10:51:51.841886  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.841893  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:51.841898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:51.841955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:51.867108  396441 cri.go:89] found id: ""
	I1213 10:51:51.867121  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.867129  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:51.867134  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:51.867187  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:51.892370  396441 cri.go:89] found id: ""
	I1213 10:51:51.892383  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.892390  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:51.892395  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:51.892453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:51.923043  396441 cri.go:89] found id: ""
	I1213 10:51:51.923057  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.923064  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:51.923069  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:51.923159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:51.948869  396441 cri.go:89] found id: ""
	I1213 10:51:51.948882  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.948889  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:51.948897  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:51.948926  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:52.018383  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:52.018405  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:52.018422  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:52.099342  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:52.099363  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:52.136780  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:52.136795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:52.202388  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:52.202408  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:54.716950  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:54.726860  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:54.726918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:54.751377  396441 cri.go:89] found id: ""
	I1213 10:51:54.751389  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.751396  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:54.751401  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:54.751460  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:54.776769  396441 cri.go:89] found id: ""
	I1213 10:51:54.776782  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.776801  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:54.776806  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:54.776871  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:54.806646  396441 cri.go:89] found id: ""
	I1213 10:51:54.806659  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.806666  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:54.806671  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:54.806727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:54.834243  396441 cri.go:89] found id: ""
	I1213 10:51:54.834256  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.834264  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:54.834269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:54.834322  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:54.859938  396441 cri.go:89] found id: ""
	I1213 10:51:54.859958  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.859965  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:54.859970  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:54.860025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:54.886545  396441 cri.go:89] found id: ""
	I1213 10:51:54.886559  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.886565  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:54.886571  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:54.886633  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:54.911784  396441 cri.go:89] found id: ""
	I1213 10:51:54.911798  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.911805  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:54.911812  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:54.911828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:54.973210  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:54.973220  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:54.973230  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:55.051411  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:55.051430  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:55.085480  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:55.085497  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:55.151220  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:55.151241  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.666660  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:57.676624  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:57.676689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:57.702082  396441 cri.go:89] found id: ""
	I1213 10:51:57.702095  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.702103  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:57.702108  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:57.702171  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:57.727577  396441 cri.go:89] found id: ""
	I1213 10:51:57.727591  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.727598  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:57.727603  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:57.727657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:57.752756  396441 cri.go:89] found id: ""
	I1213 10:51:57.752770  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.752777  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:57.752782  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:57.752846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:57.778022  396441 cri.go:89] found id: ""
	I1213 10:51:57.778036  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.778043  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:57.778048  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:57.778108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:57.803300  396441 cri.go:89] found id: ""
	I1213 10:51:57.803314  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.803321  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:57.803326  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:57.803385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:57.828374  396441 cri.go:89] found id: ""
	I1213 10:51:57.828389  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.828396  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:57.828402  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:57.828457  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:57.854910  396441 cri.go:89] found id: ""
	I1213 10:51:57.854925  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.854947  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:57.854955  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:57.854965  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:57.919106  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:57.919126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.933832  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:57.933847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:58.000903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:58.000914  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:58.000925  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:58.077434  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:58.077453  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:00.612878  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:00.623959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:00.624026  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:00.653620  396441 cri.go:89] found id: ""
	I1213 10:52:00.653635  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.653642  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:00.653647  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:00.653705  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:00.679802  396441 cri.go:89] found id: ""
	I1213 10:52:00.679818  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.679825  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:00.679830  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:00.679890  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:00.706677  396441 cri.go:89] found id: ""
	I1213 10:52:00.706691  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.706698  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:00.706703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:00.706759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:00.734612  396441 cri.go:89] found id: ""
	I1213 10:52:00.734627  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.734634  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:00.734640  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:00.734697  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:00.761763  396441 cri.go:89] found id: ""
	I1213 10:52:00.761777  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.761784  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:00.761790  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:00.761846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:00.790057  396441 cri.go:89] found id: ""
	I1213 10:52:00.790071  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.790078  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:00.790083  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:00.790140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:00.816353  396441 cri.go:89] found id: ""
	I1213 10:52:00.816367  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.816374  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:00.816381  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:00.816391  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:00.881315  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:00.881335  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:00.896220  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:00.896239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:00.961380  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:00.961391  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:00.961401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:01.031353  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:01.031373  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.565879  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:03.575985  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:03.576043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:03.605780  396441 cri.go:89] found id: ""
	I1213 10:52:03.605794  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.605801  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:03.605807  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:03.605864  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:03.630990  396441 cri.go:89] found id: ""
	I1213 10:52:03.631006  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.631013  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:03.631018  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:03.631073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:03.658564  396441 cri.go:89] found id: ""
	I1213 10:52:03.658578  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.658585  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:03.658590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:03.658645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:03.689093  396441 cri.go:89] found id: ""
	I1213 10:52:03.689108  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.689116  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:03.689121  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:03.689179  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:03.714786  396441 cri.go:89] found id: ""
	I1213 10:52:03.714800  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.714807  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:03.714812  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:03.714870  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:03.741755  396441 cri.go:89] found id: ""
	I1213 10:52:03.741769  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.741777  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:03.741783  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:03.741841  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:03.771487  396441 cri.go:89] found id: ""
	I1213 10:52:03.771502  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.771509  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:03.771538  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:03.771548  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.800650  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:03.800666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:03.866429  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:03.866448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:03.882243  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:03.882260  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:03.951157  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:03.951167  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:03.951190  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.522609  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:06.532880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:06.532944  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:06.557937  396441 cri.go:89] found id: ""
	I1213 10:52:06.557952  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.557959  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:06.557965  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:06.558020  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:06.588572  396441 cri.go:89] found id: ""
	I1213 10:52:06.588586  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.588595  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:06.588600  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:06.588660  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:06.614455  396441 cri.go:89] found id: ""
	I1213 10:52:06.614468  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.614476  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:06.614481  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:06.614546  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:06.640258  396441 cri.go:89] found id: ""
	I1213 10:52:06.640272  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.640279  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:06.640285  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:06.640341  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:06.666195  396441 cri.go:89] found id: ""
	I1213 10:52:06.666209  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.666216  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:06.666222  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:06.666278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:06.690768  396441 cri.go:89] found id: ""
	I1213 10:52:06.690781  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.690788  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:06.690793  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:06.690846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:06.714814  396441 cri.go:89] found id: ""
	I1213 10:52:06.714828  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.714835  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:06.714842  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:06.714852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:06.779445  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:06.779463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:06.794405  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:06.794419  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:06.863881  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:06.863893  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:06.863903  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.931872  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:06.931893  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:09.461689  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:09.471808  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:09.471866  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:09.498684  396441 cri.go:89] found id: ""
	I1213 10:52:09.498698  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.498705  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:09.498710  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:09.498770  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:09.525226  396441 cri.go:89] found id: ""
	I1213 10:52:09.525240  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.525248  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:09.525253  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:09.525312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:09.552412  396441 cri.go:89] found id: ""
	I1213 10:52:09.552426  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.552433  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:09.552438  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:09.552496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:09.581636  396441 cri.go:89] found id: ""
	I1213 10:52:09.581650  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.581657  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:09.581662  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:09.581717  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:09.606899  396441 cri.go:89] found id: ""
	I1213 10:52:09.606913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.606926  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:09.606931  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:09.606985  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:09.635899  396441 cri.go:89] found id: ""
	I1213 10:52:09.635913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.635920  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:09.635926  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:09.635990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:09.660294  396441 cri.go:89] found id: ""
	I1213 10:52:09.660308  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.660315  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:09.660322  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:09.660332  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:09.727938  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:09.727956  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:09.742322  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:09.742337  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:09.806667  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:09.806677  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:09.806688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:09.873384  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:09.873405  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.403419  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:12.413610  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:12.413670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:12.439264  396441 cri.go:89] found id: ""
	I1213 10:52:12.439277  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.439285  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:12.439290  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:12.439347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:12.464906  396441 cri.go:89] found id: ""
	I1213 10:52:12.464920  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.464927  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:12.464932  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:12.464988  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:12.498036  396441 cri.go:89] found id: ""
	I1213 10:52:12.498050  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.498057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:12.498062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:12.498124  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:12.527408  396441 cri.go:89] found id: ""
	I1213 10:52:12.527424  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.527432  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:12.527437  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:12.527493  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:12.553426  396441 cri.go:89] found id: ""
	I1213 10:52:12.553440  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.553449  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:12.553456  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:12.553512  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:12.577801  396441 cri.go:89] found id: ""
	I1213 10:52:12.577821  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.577829  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:12.577834  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:12.577892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:12.602596  396441 cri.go:89] found id: ""
	I1213 10:52:12.602610  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.602617  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:12.602625  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:12.602636  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:12.617159  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:12.617175  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:12.679319  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:12.679331  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:12.679344  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:12.750080  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:12.750100  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.781595  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:12.781612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:15.350487  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:15.360659  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:15.360718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:15.387859  396441 cri.go:89] found id: ""
	I1213 10:52:15.387872  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.387879  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:15.387885  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:15.387938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:15.414186  396441 cri.go:89] found id: ""
	I1213 10:52:15.414200  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.414207  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:15.414212  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:15.414279  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:15.441078  396441 cri.go:89] found id: ""
	I1213 10:52:15.441093  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.441099  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:15.441105  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:15.441160  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:15.469023  396441 cri.go:89] found id: ""
	I1213 10:52:15.469038  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.469045  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:15.469051  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:15.469107  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:15.497840  396441 cri.go:89] found id: ""
	I1213 10:52:15.497855  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.497862  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:15.497870  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:15.497929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:15.527216  396441 cri.go:89] found id: ""
	I1213 10:52:15.527240  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.527248  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:15.527253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:15.527318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:15.552512  396441 cri.go:89] found id: ""
	I1213 10:52:15.552526  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.552533  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:15.552541  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:15.552551  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:15.566854  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:15.566872  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:15.630069  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:15.630081  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:15.630091  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:15.696860  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:15.696880  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:15.724271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:15.724287  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.289647  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:18.301895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:18.301952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:18.337658  396441 cri.go:89] found id: ""
	I1213 10:52:18.337672  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.337679  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:18.337684  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:18.337739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:18.362954  396441 cri.go:89] found id: ""
	I1213 10:52:18.362968  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.362975  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:18.362980  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:18.363038  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:18.388674  396441 cri.go:89] found id: ""
	I1213 10:52:18.388687  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.388694  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:18.388699  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:18.388759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:18.420176  396441 cri.go:89] found id: ""
	I1213 10:52:18.420189  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.420196  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:18.420202  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:18.420264  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:18.445491  396441 cri.go:89] found id: ""
	I1213 10:52:18.445505  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.445513  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:18.445518  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:18.445579  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:18.470012  396441 cri.go:89] found id: ""
	I1213 10:52:18.470026  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.470034  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:18.470039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:18.470097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:18.495243  396441 cri.go:89] found id: ""
	I1213 10:52:18.495257  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.495264  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:18.495271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:18.495282  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.563479  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:18.563500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:18.578295  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:18.578311  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:18.646148  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:18.646163  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:18.646174  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:18.718257  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:18.718284  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:21.249994  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:21.259664  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:21.259726  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:21.295330  396441 cri.go:89] found id: ""
	I1213 10:52:21.295344  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.295352  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:21.295359  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:21.295416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:21.321231  396441 cri.go:89] found id: ""
	I1213 10:52:21.321244  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.321252  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:21.321257  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:21.321315  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:21.352593  396441 cri.go:89] found id: ""
	I1213 10:52:21.352607  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.352615  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:21.352620  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:21.352673  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:21.377931  396441 cri.go:89] found id: ""
	I1213 10:52:21.377946  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.377953  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:21.377959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:21.378013  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:21.402837  396441 cri.go:89] found id: ""
	I1213 10:52:21.402851  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.402857  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:21.402863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:21.402917  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:21.431840  396441 cri.go:89] found id: ""
	I1213 10:52:21.431855  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.431862  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:21.431867  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:21.431923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:21.456743  396441 cri.go:89] found id: ""
	I1213 10:52:21.456757  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.456764  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:21.456772  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:21.456783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:21.524923  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:21.524943  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:21.539831  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:21.539847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:21.606862  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:21.606873  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:21.606883  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:21.674639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:21.674658  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.206551  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:24.216405  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:24.216463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:24.242228  396441 cri.go:89] found id: ""
	I1213 10:52:24.242242  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.242257  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:24.242262  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:24.242323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:24.267087  396441 cri.go:89] found id: ""
	I1213 10:52:24.267101  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.267108  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:24.267113  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:24.267165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:24.309002  396441 cri.go:89] found id: ""
	I1213 10:52:24.309015  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.309022  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:24.309027  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:24.309094  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:24.339349  396441 cri.go:89] found id: ""
	I1213 10:52:24.339362  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.339370  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:24.339375  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:24.339432  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:24.368576  396441 cri.go:89] found id: ""
	I1213 10:52:24.368590  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.368597  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:24.368602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:24.368659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:24.394642  396441 cri.go:89] found id: ""
	I1213 10:52:24.394656  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.394663  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:24.394669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:24.394733  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:24.421211  396441 cri.go:89] found id: ""
	I1213 10:52:24.421225  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.421232  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:24.421240  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:24.421250  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:24.487558  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:24.487569  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:24.487579  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:24.558449  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:24.558469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.588318  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:24.588333  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:24.654250  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:24.654270  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:27.169201  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:27.180049  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:27.180109  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:27.206061  396441 cri.go:89] found id: ""
	I1213 10:52:27.206075  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.206082  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:27.206096  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:27.206154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:27.233191  396441 cri.go:89] found id: ""
	I1213 10:52:27.233205  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.233214  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:27.233219  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:27.233281  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:27.260006  396441 cri.go:89] found id: ""
	I1213 10:52:27.260026  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.260034  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:27.260039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:27.260097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:27.297935  396441 cri.go:89] found id: ""
	I1213 10:52:27.297949  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.297956  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:27.297962  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:27.298016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:27.327550  396441 cri.go:89] found id: ""
	I1213 10:52:27.327564  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.327571  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:27.327576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:27.327632  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:27.357264  396441 cri.go:89] found id: ""
	I1213 10:52:27.357277  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.357285  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:27.357290  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:27.357345  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:27.386557  396441 cri.go:89] found id: ""
	I1213 10:52:27.386571  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.386579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:27.386587  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:27.386600  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:27.451879  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:27.451900  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:27.466743  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:27.466762  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:27.534974  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:27.534984  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:27.534996  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:27.603674  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:27.603693  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:30.134007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:30.145384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:30.145454  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:30.177035  396441 cri.go:89] found id: ""
	I1213 10:52:30.177050  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.177058  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:30.177063  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:30.177121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:30.203582  396441 cri.go:89] found id: ""
	I1213 10:52:30.203597  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.203604  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:30.203609  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:30.203689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:30.230074  396441 cri.go:89] found id: ""
	I1213 10:52:30.230088  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.230106  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:30.230112  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:30.230183  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:30.255406  396441 cri.go:89] found id: ""
	I1213 10:52:30.255431  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.255439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:30.255445  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:30.255527  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:30.302847  396441 cri.go:89] found id: ""
	I1213 10:52:30.302861  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.302869  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:30.302876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:30.302931  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:30.345708  396441 cri.go:89] found id: ""
	I1213 10:52:30.345722  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.345730  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:30.345735  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:30.345794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:30.373285  396441 cri.go:89] found id: ""
	I1213 10:52:30.373298  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.373305  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:30.373313  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:30.373323  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:30.438965  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:30.438984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:30.453939  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:30.453957  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:30.519205  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:30.519233  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:30.519245  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:30.587307  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:30.587327  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
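
	The lines above show one pass of minikube's log-gathering retry loop: the kube-apiserver process probe finds nothing, crictl reports no control-plane containers, and kubectl describe nodes fails because nothing is listening on localhost:8441. The remainder of this section repeats the same pass every few seconds, with only the timestamps and PIDs changing. Below is a minimal shell sketch of the checks being retried, built only from commands that appear verbatim in this log; the kubectl binary path and kubeconfig location are the ones logged for this run and may differ elsewhere.

	#!/usr/bin/env bash
	# Sketch of the probes this log keeps retrying (assumed paths taken from the log above).
	KUBECTL=/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	KUBECONFIG=/var/lib/minikube/kubeconfig

	# 1. Is a kube-apiserver process running at all? (prints nothing and exits non-zero when absent)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	# 2. Does CRI-O know about any control-plane containers? Empty output means none were found.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done

	# 3. The query that fails with "connection refused" on localhost:8441 in the log.
	sudo "$KUBECTL" describe nodes --kubeconfig="$KUBECONFIG"

	Against the node in the state captured here, step 1 finds no process, every listing in step 2 comes back empty, and step 3 fails with the same "connection refused" errors shown in the stderr blocks throughout this section.
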
	I1213 10:52:33.117585  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:33.128213  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:33.128278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:33.159433  396441 cri.go:89] found id: ""
	I1213 10:52:33.159447  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.159455  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:33.159462  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:33.159561  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:33.188876  396441 cri.go:89] found id: ""
	I1213 10:52:33.188890  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.188898  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:33.188904  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:33.188959  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:33.213013  396441 cri.go:89] found id: ""
	I1213 10:52:33.213026  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.213033  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:33.213038  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:33.213098  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:33.237950  396441 cri.go:89] found id: ""
	I1213 10:52:33.237964  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.237971  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:33.237976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:33.238030  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:33.262873  396441 cri.go:89] found id: ""
	I1213 10:52:33.262887  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.262894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:33.262899  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:33.262955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:33.289230  396441 cri.go:89] found id: ""
	I1213 10:52:33.289243  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.289250  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:33.289256  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:33.289312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:33.322162  396441 cri.go:89] found id: ""
	I1213 10:52:33.322175  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.322182  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:33.322196  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:33.322206  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:33.350122  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:33.350138  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:33.415463  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:33.415483  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:33.430091  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:33.430108  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:33.492694  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:33.492704  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:33.492713  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.059928  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:36.071377  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:36.071452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:36.097664  396441 cri.go:89] found id: ""
	I1213 10:52:36.097678  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.097685  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:36.097691  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:36.097753  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:36.123266  396441 cri.go:89] found id: ""
	I1213 10:52:36.123280  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.123287  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:36.123292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:36.123348  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:36.149443  396441 cri.go:89] found id: ""
	I1213 10:52:36.149456  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.149464  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:36.149469  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:36.149525  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:36.174882  396441 cri.go:89] found id: ""
	I1213 10:52:36.174896  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.174903  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:36.174909  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:36.174965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:36.204325  396441 cri.go:89] found id: ""
	I1213 10:52:36.204348  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.204356  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:36.204362  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:36.204427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:36.234444  396441 cri.go:89] found id: ""
	I1213 10:52:36.234457  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.234474  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:36.234479  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:36.234550  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:36.259366  396441 cri.go:89] found id: ""
	I1213 10:52:36.259390  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.259397  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:36.259406  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:36.259416  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:36.332816  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:36.332834  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:36.348343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:36.348362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:36.412337  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:36.412348  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:36.412358  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.480447  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:36.480469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:39.011418  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:39.022791  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:39.022856  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:39.048926  396441 cri.go:89] found id: ""
	I1213 10:52:39.048939  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.048946  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:39.048951  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:39.049008  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:39.074187  396441 cri.go:89] found id: ""
	I1213 10:52:39.074201  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.074209  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:39.074214  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:39.074274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:39.099262  396441 cri.go:89] found id: ""
	I1213 10:52:39.099275  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.099282  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:39.099288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:39.099351  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:39.123854  396441 cri.go:89] found id: ""
	I1213 10:52:39.123868  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.123876  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:39.123881  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:39.123935  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:39.148849  396441 cri.go:89] found id: ""
	I1213 10:52:39.148864  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.148871  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:39.148876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:39.148937  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:39.178852  396441 cri.go:89] found id: ""
	I1213 10:52:39.178866  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.178873  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:39.178879  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:39.178936  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:39.203878  396441 cri.go:89] found id: ""
	I1213 10:52:39.203892  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.203899  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:39.203907  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:39.203921  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:39.270764  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:39.270783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:39.286957  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:39.286976  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:39.359682  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:39.359693  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:39.359707  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:39.429853  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:39.429874  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:41.960684  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:41.971667  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:41.971727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:42.002821  396441 cri.go:89] found id: ""
	I1213 10:52:42.002836  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.002844  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:42.002849  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:42.002914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:42.045054  396441 cri.go:89] found id: ""
	I1213 10:52:42.045068  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.045075  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:42.045080  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:42.045141  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:42.077836  396441 cri.go:89] found id: ""
	I1213 10:52:42.077852  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.077865  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:42.077871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:42.077947  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:42.115684  396441 cri.go:89] found id: ""
	I1213 10:52:42.115706  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.115714  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:42.115729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:42.115828  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:42.147177  396441 cri.go:89] found id: ""
	I1213 10:52:42.147194  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.147202  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:42.147208  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:42.147280  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:42.180144  396441 cri.go:89] found id: ""
	I1213 10:52:42.180165  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.180174  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:42.180181  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:42.180255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:42.220442  396441 cri.go:89] found id: ""
	I1213 10:52:42.220457  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.220466  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:42.220475  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:42.220486  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:42.297964  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:42.297984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:42.315552  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:42.315571  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:42.388538  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:42.388548  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:42.388558  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:42.457255  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:42.457276  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:44.987527  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:44.999384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:44.999443  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:45.050333  396441 cri.go:89] found id: ""
	I1213 10:52:45.050351  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.050366  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:45.050372  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:45.050449  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:45.102093  396441 cri.go:89] found id: ""
	I1213 10:52:45.102110  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.102126  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:45.102132  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:45.102218  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:45.141159  396441 cri.go:89] found id: ""
	I1213 10:52:45.141176  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.141184  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:45.141190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:45.141265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:45.181959  396441 cri.go:89] found id: ""
	I1213 10:52:45.181976  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.181994  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:45.182000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:45.182074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:45.231005  396441 cri.go:89] found id: ""
	I1213 10:52:45.231020  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.231027  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:45.231033  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:45.231103  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:45.269802  396441 cri.go:89] found id: ""
	I1213 10:52:45.269816  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.269824  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:45.269829  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:45.269906  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:45.302267  396441 cri.go:89] found id: ""
	I1213 10:52:45.302281  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.302289  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:45.302297  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:45.302307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:45.375709  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:45.375731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:45.390641  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:45.390662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:45.456742  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:45.456753  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:45.456763  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:45.525649  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:45.525668  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.060311  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:48.071648  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:48.071715  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:48.102851  396441 cri.go:89] found id: ""
	I1213 10:52:48.102865  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.102872  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:48.102878  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:48.102948  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:48.128470  396441 cri.go:89] found id: ""
	I1213 10:52:48.128485  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.128492  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:48.128499  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:48.128556  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:48.155177  396441 cri.go:89] found id: ""
	I1213 10:52:48.155197  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.155205  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:48.155210  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:48.155265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:48.182358  396441 cri.go:89] found id: ""
	I1213 10:52:48.182373  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.182380  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:48.182385  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:48.182447  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:48.208531  396441 cri.go:89] found id: ""
	I1213 10:52:48.208550  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.208557  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:48.208562  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:48.208616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:48.234008  396441 cri.go:89] found id: ""
	I1213 10:52:48.234023  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.234031  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:48.234036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:48.234093  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:48.261447  396441 cri.go:89] found id: ""
	I1213 10:52:48.261461  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.261469  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:48.261480  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:48.261492  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:48.278413  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:48.278429  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:48.358811  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:48.358821  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:48.358832  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:48.433414  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:48.433443  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.466431  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:48.466452  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.033966  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:51.044258  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:51.044317  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:51.072809  396441 cri.go:89] found id: ""
	I1213 10:52:51.072823  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.072830  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:51.072836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:51.072895  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:51.102333  396441 cri.go:89] found id: ""
	I1213 10:52:51.102346  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.102353  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:51.102358  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:51.102415  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:51.128414  396441 cri.go:89] found id: ""
	I1213 10:52:51.128427  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.128434  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:51.128439  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:51.128494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:51.154902  396441 cri.go:89] found id: ""
	I1213 10:52:51.154916  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.154923  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:51.154928  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:51.154983  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:51.182112  396441 cri.go:89] found id: ""
	I1213 10:52:51.182126  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.182133  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:51.182143  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:51.182197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:51.207919  396441 cri.go:89] found id: ""
	I1213 10:52:51.207933  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.207941  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:51.207946  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:51.208001  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:51.234193  396441 cri.go:89] found id: ""
	I1213 10:52:51.234207  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.234214  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:51.234222  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:51.234238  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.303042  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:51.303060  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:51.321366  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:51.321383  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:51.393364  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:51.393375  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:51.393385  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:51.461747  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:51.461768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:53.992488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:54.002605  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:54.002667  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:54.037835  396441 cri.go:89] found id: ""
	I1213 10:52:54.037849  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.037857  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:54.037862  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:54.037934  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:54.066982  396441 cri.go:89] found id: ""
	I1213 10:52:54.066998  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.067009  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:54.067015  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:54.067074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:54.093461  396441 cri.go:89] found id: ""
	I1213 10:52:54.093475  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.093482  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:54.093487  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:54.093544  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:54.123249  396441 cri.go:89] found id: ""
	I1213 10:52:54.123263  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.123271  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:54.123276  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:54.123333  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:54.150103  396441 cri.go:89] found id: ""
	I1213 10:52:54.150116  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.150124  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:54.150130  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:54.150186  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:54.176271  396441 cri.go:89] found id: ""
	I1213 10:52:54.176285  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.176291  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:54.176296  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:54.176355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:54.204655  396441 cri.go:89] found id: ""
	I1213 10:52:54.204669  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.204676  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:54.204684  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:54.204695  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:54.270252  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:54.270262  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:54.270272  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:54.345996  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:54.346016  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:54.383713  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:54.383730  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:54.450349  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:54.450368  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:56.966888  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:56.976557  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:56.976616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:57.007803  396441 cri.go:89] found id: ""
	I1213 10:52:57.007828  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.007836  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:57.007842  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:57.007910  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:57.035051  396441 cri.go:89] found id: ""
	I1213 10:52:57.035065  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.035073  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:57.035078  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:57.035137  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:57.060632  396441 cri.go:89] found id: ""
	I1213 10:52:57.060645  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.060652  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:57.060657  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:57.060716  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:57.090660  396441 cri.go:89] found id: ""
	I1213 10:52:57.090674  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.090681  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:57.090686  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:57.090741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:57.115624  396441 cri.go:89] found id: ""
	I1213 10:52:57.115638  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.115645  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:57.115650  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:57.115718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:57.146066  396441 cri.go:89] found id: ""
	I1213 10:52:57.146080  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.146087  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:57.146093  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:57.146147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:57.174574  396441 cri.go:89] found id: ""
	I1213 10:52:57.174589  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.174596  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:57.174604  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:57.174614  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:57.202471  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:57.202487  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:57.267828  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:57.267852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:57.284906  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:57.284922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:57.357618  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:57.357629  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:57.357641  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:59.928373  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:59.939417  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:59.939503  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:59.968871  396441 cri.go:89] found id: ""
	I1213 10:52:59.968885  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.968892  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:59.968897  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:59.968952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:59.994167  396441 cri.go:89] found id: ""
	I1213 10:52:59.994181  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.994188  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:59.994192  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:59.994244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:00.051356  396441 cri.go:89] found id: ""
	I1213 10:53:00.051372  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.051380  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:00.051386  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:00.051453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:00.143874  396441 cri.go:89] found id: ""
	I1213 10:53:00.143902  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.143910  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:00.143915  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:00.143990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:00.245636  396441 cri.go:89] found id: ""
	I1213 10:53:00.245660  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.245669  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:00.245676  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:00.245762  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:00.304351  396441 cri.go:89] found id: ""
	I1213 10:53:00.304370  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.304378  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:00.304384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:00.304463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:00.342460  396441 cri.go:89] found id: ""
	I1213 10:53:00.342483  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.342492  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:00.342503  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:00.342552  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:00.422913  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:00.422924  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:00.422935  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:00.494010  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:00.494031  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:00.523384  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:00.523401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:00.590600  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:00.590620  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.105926  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:03.116415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:03.116476  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:03.148167  396441 cri.go:89] found id: ""
	I1213 10:53:03.148181  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.148189  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:03.148195  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:03.148255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:03.173610  396441 cri.go:89] found id: ""
	I1213 10:53:03.173624  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.173633  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:03.173638  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:03.173698  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:03.198406  396441 cri.go:89] found id: ""
	I1213 10:53:03.198420  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.198427  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:03.198432  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:03.198494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:03.228196  396441 cri.go:89] found id: ""
	I1213 10:53:03.228210  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.228218  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:03.228223  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:03.228284  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:03.258506  396441 cri.go:89] found id: ""
	I1213 10:53:03.258539  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.258547  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:03.258552  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:03.258617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:03.293938  396441 cri.go:89] found id: ""
	I1213 10:53:03.293951  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.293968  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:03.293973  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:03.294029  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:03.322417  396441 cri.go:89] found id: ""
	I1213 10:53:03.322441  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.322448  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:03.322456  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:03.322467  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.338484  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:03.338500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:03.404903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:03.404913  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:03.404930  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:03.476102  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:03.476122  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:03.508468  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:03.508484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.073576  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:06.084007  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:06.084073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:06.110819  396441 cri.go:89] found id: ""
	I1213 10:53:06.110834  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.110841  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:06.110847  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:06.110915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:06.136257  396441 cri.go:89] found id: ""
	I1213 10:53:06.136271  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.136278  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:06.136286  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:06.136344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:06.162392  396441 cri.go:89] found id: ""
	I1213 10:53:06.162406  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.162413  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:06.162419  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:06.162479  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:06.191163  396441 cri.go:89] found id: ""
	I1213 10:53:06.191178  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.191185  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:06.191190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:06.191244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:06.217747  396441 cri.go:89] found id: ""
	I1213 10:53:06.217761  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.217769  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:06.217774  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:06.217829  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:06.242838  396441 cri.go:89] found id: ""
	I1213 10:53:06.242851  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.242858  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:06.242864  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:06.242918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:06.267811  396441 cri.go:89] found id: ""
	I1213 10:53:06.267831  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.267838  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:06.267846  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:06.267857  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:06.351297  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:06.351310  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:06.351321  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:06.418677  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:06.418696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:06.456760  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:06.456778  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.525341  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:06.525362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:09.044095  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:09.054348  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:09.054410  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:09.081344  396441 cri.go:89] found id: ""
	I1213 10:53:09.081358  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.081365  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:09.081376  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:09.081434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:09.107998  396441 cri.go:89] found id: ""
	I1213 10:53:09.108012  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.108019  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:09.108024  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:09.108084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:09.133582  396441 cri.go:89] found id: ""
	I1213 10:53:09.133596  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.133603  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:09.133608  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:09.133666  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:09.158646  396441 cri.go:89] found id: ""
	I1213 10:53:09.158669  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.158677  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:09.158682  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:09.158746  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:09.184013  396441 cri.go:89] found id: ""
	I1213 10:53:09.184028  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.184035  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:09.184040  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:09.184097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:09.210338  396441 cri.go:89] found id: ""
	I1213 10:53:09.210352  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.210370  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:09.210376  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:09.210434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:09.236029  396441 cri.go:89] found id: ""
	I1213 10:53:09.236045  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.236052  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:09.236059  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:09.236069  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:09.310970  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:09.310981  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:09.310992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:09.380678  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:09.380700  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:09.413354  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:09.413371  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:09.481585  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:09.481603  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:11.996259  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:12.009133  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:12.009217  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:12.044141  396441 cri.go:89] found id: ""
	I1213 10:53:12.044157  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.044164  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:12.044170  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:12.044230  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:12.070547  396441 cri.go:89] found id: ""
	I1213 10:53:12.070579  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.070587  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:12.070598  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:12.070664  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:12.095879  396441 cri.go:89] found id: ""
	I1213 10:53:12.095893  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.095900  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:12.095905  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:12.095965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:12.125533  396441 cri.go:89] found id: ""
	I1213 10:53:12.125547  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.125554  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:12.125559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:12.125618  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:12.151281  396441 cri.go:89] found id: ""
	I1213 10:53:12.151303  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.151311  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:12.151317  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:12.151385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:12.176331  396441 cri.go:89] found id: ""
	I1213 10:53:12.176353  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.176361  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:12.176366  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:12.176433  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:12.202465  396441 cri.go:89] found id: ""
	I1213 10:53:12.202486  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.202493  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:12.202500  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:12.202523  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:12.268244  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:12.268263  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:12.285364  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:12.285379  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:12.357173  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:12.357192  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:12.357204  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:12.424809  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:12.424830  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:14.955688  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:14.967057  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:14.967115  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:14.993136  396441 cri.go:89] found id: ""
	I1213 10:53:14.993150  396441 logs.go:282] 0 containers: []
	W1213 10:53:14.993157  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:14.993163  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:14.993220  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:15.028691  396441 cri.go:89] found id: ""
	I1213 10:53:15.028707  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.028722  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:15.028728  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:15.028794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:15.056676  396441 cri.go:89] found id: ""
	I1213 10:53:15.056705  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.056732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:15.056739  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:15.056800  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:15.085199  396441 cri.go:89] found id: ""
	I1213 10:53:15.085213  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.085221  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:15.085226  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:15.085288  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:15.113074  396441 cri.go:89] found id: ""
	I1213 10:53:15.113088  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.113095  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:15.113101  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:15.113159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:15.142568  396441 cri.go:89] found id: ""
	I1213 10:53:15.142581  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.142589  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:15.142595  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:15.142655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:15.167430  396441 cri.go:89] found id: ""
	I1213 10:53:15.167443  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.167450  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:15.167458  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:15.167471  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:15.233925  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:15.233946  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:15.248849  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:15.248866  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:15.332377  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:15.332397  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:15.332409  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:15.401263  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:15.401283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:17.930625  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:17.940643  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:17.940703  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:17.965657  396441 cri.go:89] found id: ""
	I1213 10:53:17.965671  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.965678  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:17.965683  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:17.965740  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:17.990612  396441 cri.go:89] found id: ""
	I1213 10:53:17.990635  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.990642  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:17.990648  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:17.990723  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:18.025034  396441 cri.go:89] found id: ""
	I1213 10:53:18.025049  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.025057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:18.025063  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:18.025123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:18.052589  396441 cri.go:89] found id: ""
	I1213 10:53:18.052611  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.052619  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:18.052625  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:18.052683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:18.079906  396441 cri.go:89] found id: ""
	I1213 10:53:18.079921  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.079929  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:18.079935  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:18.079997  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:18.107302  396441 cri.go:89] found id: ""
	I1213 10:53:18.107327  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.107335  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:18.107340  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:18.107409  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:18.135776  396441 cri.go:89] found id: ""
	I1213 10:53:18.135790  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.135797  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:18.135805  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:18.135815  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:18.153173  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:18.153189  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:18.221544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:18.221554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:18.221565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:18.296047  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:18.296072  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:18.330043  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:18.330063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:20.909395  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:20.919737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:20.919799  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:20.946000  396441 cri.go:89] found id: ""
	I1213 10:53:20.946014  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.946022  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:20.946027  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:20.946084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:20.975734  396441 cri.go:89] found id: ""
	I1213 10:53:20.975749  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.975756  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:20.975761  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:20.975815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:21.000961  396441 cri.go:89] found id: ""
	I1213 10:53:21.000976  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.000983  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:21.000988  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:21.001043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:21.027875  396441 cri.go:89] found id: ""
	I1213 10:53:21.027889  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.027896  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:21.027902  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:21.027963  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:21.053113  396441 cri.go:89] found id: ""
	I1213 10:53:21.053127  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.053134  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:21.053140  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:21.053198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:21.078404  396441 cri.go:89] found id: ""
	I1213 10:53:21.078418  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.078425  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:21.078430  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:21.078484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:21.103558  396441 cri.go:89] found id: ""
	I1213 10:53:21.103571  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.103579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:21.103592  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:21.103604  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:21.172527  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:21.172545  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:21.187768  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:21.187785  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:21.256696  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:21.256707  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:21.256717  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:21.327132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:21.327151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:23.867087  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:23.877218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:23.877278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:23.901809  396441 cri.go:89] found id: ""
	I1213 10:53:23.901824  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.901831  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:23.901836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:23.901892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:23.928024  396441 cri.go:89] found id: ""
	I1213 10:53:23.928038  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.928044  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:23.928051  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:23.928104  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:23.953141  396441 cri.go:89] found id: ""
	I1213 10:53:23.953154  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.953161  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:23.953166  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:23.953223  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:23.981670  396441 cri.go:89] found id: ""
	I1213 10:53:23.981684  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.981691  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:23.981696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:23.981754  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:24.014889  396441 cri.go:89] found id: ""
	I1213 10:53:24.014904  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.014912  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:24.014917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:24.014982  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:24.041025  396441 cri.go:89] found id: ""
	I1213 10:53:24.041040  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.041047  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:24.041052  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:24.041110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:24.068555  396441 cri.go:89] found id: ""
	I1213 10:53:24.068570  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.068578  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:24.068586  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:24.068596  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:24.082803  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:24.082819  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:24.145822  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:24.145832  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:24.145843  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:24.213727  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:24.213747  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:24.241111  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:24.241126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:26.808221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:26.818590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:26.818659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:26.848553  396441 cri.go:89] found id: ""
	I1213 10:53:26.848568  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.848575  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:26.848580  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:26.848636  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:26.878256  396441 cri.go:89] found id: ""
	I1213 10:53:26.878274  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.878281  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:26.878288  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:26.878343  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:26.905040  396441 cri.go:89] found id: ""
	I1213 10:53:26.905054  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.905061  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:26.905067  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:26.905140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:26.933587  396441 cri.go:89] found id: ""
	I1213 10:53:26.933601  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.933608  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:26.933613  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:26.933669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:26.958154  396441 cri.go:89] found id: ""
	I1213 10:53:26.958167  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.958175  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:26.958180  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:26.958240  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:26.986142  396441 cri.go:89] found id: ""
	I1213 10:53:26.986156  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.986164  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:26.986169  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:26.986222  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:27.013602  396441 cri.go:89] found id: ""
	I1213 10:53:27.013617  396441 logs.go:282] 0 containers: []
	W1213 10:53:27.013625  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:27.013633  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:27.013643  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:27.080830  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:27.080850  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:27.109824  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:27.109839  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:27.175975  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:27.176002  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:27.190437  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:27.190456  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:27.254921  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:29.755755  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:29.767564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:29.767645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:29.797908  396441 cri.go:89] found id: ""
	I1213 10:53:29.797922  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.797929  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:29.797935  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:29.797994  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:29.824494  396441 cri.go:89] found id: ""
	I1213 10:53:29.824508  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.824516  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:29.824521  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:29.824577  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:29.853869  396441 cri.go:89] found id: ""
	I1213 10:53:29.853883  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.853890  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:29.853895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:29.853951  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:29.883491  396441 cri.go:89] found id: ""
	I1213 10:53:29.883504  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.883526  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:29.883531  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:29.883590  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:29.908921  396441 cri.go:89] found id: ""
	I1213 10:53:29.908935  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.908943  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:29.908948  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:29.909004  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:29.938464  396441 cri.go:89] found id: ""
	I1213 10:53:29.938478  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.938485  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:29.938490  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:29.938568  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:29.964642  396441 cri.go:89] found id: ""
	I1213 10:53:29.964658  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.964665  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:29.964672  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:29.964682  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:30.032663  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:30.032688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:30.050167  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:30.050188  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:30.119376  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:30.119387  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:30.119398  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:30.188285  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:30.188307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:32.723464  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:32.734250  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:32.734319  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:32.760154  396441 cri.go:89] found id: ""
	I1213 10:53:32.760168  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.760175  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:32.760180  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:32.760237  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:32.788893  396441 cri.go:89] found id: ""
	I1213 10:53:32.788906  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.788913  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:32.788918  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:32.788973  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:32.815801  396441 cri.go:89] found id: ""
	I1213 10:53:32.815815  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.815822  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:32.815827  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:32.815884  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:32.840740  396441 cri.go:89] found id: ""
	I1213 10:53:32.840754  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.840761  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:32.840766  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:32.840820  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:32.865881  396441 cri.go:89] found id: ""
	I1213 10:53:32.865895  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.865902  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:32.865907  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:32.865962  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:32.891687  396441 cri.go:89] found id: ""
	I1213 10:53:32.891702  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.891709  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:32.891714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:32.891768  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:32.918219  396441 cri.go:89] found id: ""
	I1213 10:53:32.918233  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.918240  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:32.918248  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:32.918271  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:32.982730  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:32.982749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:32.982759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:33.055443  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:33.055464  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:33.092574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:33.092592  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:33.159246  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:33.159268  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.674110  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:35.683841  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:35.683897  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:35.708708  396441 cri.go:89] found id: ""
	I1213 10:53:35.708722  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.708729  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:35.708735  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:35.708792  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:35.733638  396441 cri.go:89] found id: ""
	I1213 10:53:35.733652  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.733659  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:35.733665  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:35.733725  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:35.759232  396441 cri.go:89] found id: ""
	I1213 10:53:35.759246  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.759254  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:35.759259  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:35.759318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:35.787542  396441 cri.go:89] found id: ""
	I1213 10:53:35.787557  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.787564  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:35.787569  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:35.787625  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:35.811703  396441 cri.go:89] found id: ""
	I1213 10:53:35.811716  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.811724  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:35.811729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:35.811786  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:35.837035  396441 cri.go:89] found id: ""
	I1213 10:53:35.837049  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.837057  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:35.837062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:35.837121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:35.863392  396441 cri.go:89] found id: ""
	I1213 10:53:35.863406  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.863414  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:35.863421  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:35.863431  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:35.928750  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:35.928771  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.943680  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:35.943696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:36.014992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:36.015006  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:36.015018  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:36.088705  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:36.088726  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:38.618865  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:38.628567  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:38.628627  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:38.657828  396441 cri.go:89] found id: ""
	I1213 10:53:38.657842  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.657853  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:38.657859  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:38.657916  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:38.686067  396441 cri.go:89] found id: ""
	I1213 10:53:38.686081  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.686088  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:38.686093  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:38.686148  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:38.723682  396441 cri.go:89] found id: ""
	I1213 10:53:38.723696  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.723703  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:38.723709  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:38.723764  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:38.749537  396441 cri.go:89] found id: ""
	I1213 10:53:38.749552  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.749559  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:38.749564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:38.749617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:38.774109  396441 cri.go:89] found id: ""
	I1213 10:53:38.774129  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.774136  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:38.774141  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:38.774198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:38.799225  396441 cri.go:89] found id: ""
	I1213 10:53:38.799239  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.799263  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:38.799269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:38.799323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:38.828154  396441 cri.go:89] found id: ""
	I1213 10:53:38.828168  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.828176  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:38.828183  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:38.828192  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:38.892547  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:38.892565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:38.907245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:38.907267  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:38.971825  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:38.971835  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:38.971847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:39.041005  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:39.041026  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.575691  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:41.585703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:41.585767  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:41.611468  396441 cri.go:89] found id: ""
	I1213 10:53:41.611482  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.611490  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:41.611495  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:41.611582  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:41.637775  396441 cri.go:89] found id: ""
	I1213 10:53:41.637790  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.637797  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:41.637802  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:41.637865  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:41.666669  396441 cri.go:89] found id: ""
	I1213 10:53:41.666683  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.666691  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:41.666696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:41.666750  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:41.691305  396441 cri.go:89] found id: ""
	I1213 10:53:41.691328  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.691336  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:41.691341  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:41.691403  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:41.716485  396441 cri.go:89] found id: ""
	I1213 10:53:41.716506  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.716514  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:41.716519  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:41.716576  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:41.745432  396441 cri.go:89] found id: ""
	I1213 10:53:41.745446  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.745453  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:41.745458  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:41.745515  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:41.770118  396441 cri.go:89] found id: ""
	I1213 10:53:41.770131  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.770138  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:41.770156  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:41.770165  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.799454  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:41.799470  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:41.863838  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:41.863858  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:41.878805  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:41.878821  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:41.944990  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:41.945000  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:41.945011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.513654  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:44.523863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:44.523923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:44.556878  396441 cri.go:89] found id: ""
	I1213 10:53:44.556891  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.556912  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:44.556917  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:44.556984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:44.592098  396441 cri.go:89] found id: ""
	I1213 10:53:44.592111  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.592128  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:44.592133  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:44.592200  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:44.620862  396441 cri.go:89] found id: ""
	I1213 10:53:44.620875  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.620883  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:44.620898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:44.620965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:44.652601  396441 cri.go:89] found id: ""
	I1213 10:53:44.652615  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.652622  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:44.652627  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:44.652683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:44.678239  396441 cri.go:89] found id: ""
	I1213 10:53:44.678253  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.678269  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:44.678275  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:44.678340  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:44.703917  396441 cri.go:89] found id: ""
	I1213 10:53:44.703930  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.703938  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:44.703943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:44.704002  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:44.730484  396441 cri.go:89] found id: ""
	I1213 10:53:44.730497  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.730505  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:44.730523  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:44.730538  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:44.744828  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:44.744844  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:44.809441  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:44.809451  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:44.809463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.877771  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:44.877793  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:44.911088  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:44.911103  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:47.481207  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:47.491256  396441 kubeadm.go:602] duration metric: took 4m3.474830683s to restartPrimaryControlPlane
	W1213 10:53:47.491316  396441 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:53:47.491392  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:53:47.914152  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:53:47.926543  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:53:47.934327  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:53:47.934378  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:53:47.941688  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:53:47.941697  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:53:47.941743  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:53:47.949173  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:53:47.949232  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:53:47.956350  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:53:47.963878  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:53:47.963941  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:53:47.971122  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.978729  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:53:47.978780  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.985856  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:53:47.993466  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:53:47.993519  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:53:48.001100  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:53:48.045742  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:53:48.045801  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:53:48.119066  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:53:48.119144  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:53:48.119191  396441 kubeadm.go:319] OS: Linux
	I1213 10:53:48.119235  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:53:48.119293  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:53:48.119348  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:53:48.119396  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:53:48.119453  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:53:48.119544  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:53:48.119589  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:53:48.119648  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:53:48.119703  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:53:48.191760  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:53:48.191864  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:53:48.191953  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:53:48.199827  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:53:48.203364  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:53:48.203457  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:53:48.203575  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:53:48.203646  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:53:48.203710  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:53:48.203925  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:53:48.203983  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:53:48.204042  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:53:48.204098  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:53:48.204167  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:53:48.204241  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:53:48.204278  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:53:48.204329  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:53:48.358581  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:53:48.732777  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:53:49.132208  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:53:49.321084  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:53:49.412268  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:53:49.412908  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:53:49.417021  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:53:49.420254  396441 out.go:252]   - Booting up control plane ...
	I1213 10:53:49.420359  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:53:49.420477  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:53:49.421364  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:53:49.437192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:53:49.437314  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:53:49.445560  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:53:49.445850  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:53:49.446065  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:53:49.579988  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:53:49.580095  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:57:49.575955  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000564023s
	I1213 10:57:49.575972  396441 kubeadm.go:319] 
	I1213 10:57:49.576025  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:57:49.576055  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:57:49.576153  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:57:49.576156  396441 kubeadm.go:319] 
	I1213 10:57:49.576253  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:57:49.576282  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:57:49.576311  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:57:49.576314  396441 kubeadm.go:319] 
	I1213 10:57:49.584496  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:57:49.584979  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:57:49.585109  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:57:49.585360  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:57:49.585367  396441 kubeadm.go:319] 
	I1213 10:57:49.585449  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:57:49.585544  396441 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000564023s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:57:49.585636  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:57:50.015805  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:50.030733  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:57:50.030794  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:57:50.040503  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:57:50.040514  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:57:50.040573  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:57:50.049098  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:57:50.049158  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:57:50.057150  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:57:50.066557  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:57:50.066659  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:57:50.074920  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.083448  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:57:50.083507  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.092213  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:57:50.100606  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:57:50.100667  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:57:50.108705  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:57:50.150598  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:57:50.150922  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:57:50.222346  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:57:50.222407  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:57:50.222441  396441 kubeadm.go:319] OS: Linux
	I1213 10:57:50.222482  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:57:50.222526  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:57:50.222570  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:57:50.222621  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:57:50.222666  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:57:50.222718  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:57:50.222760  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:57:50.222804  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:57:50.222847  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:57:50.290176  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:57:50.290279  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:57:50.290370  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:57:50.297738  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:57:50.303127  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:57:50.303239  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:57:50.303307  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:57:50.303384  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:57:50.303444  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:57:50.303589  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:57:50.303642  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:57:50.303705  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:57:50.303769  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:57:50.303843  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:57:50.303915  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:57:50.303952  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:57:50.304007  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:57:50.552022  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:57:50.900706  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:57:50.944600  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:57:51.426451  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:57:51.746824  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:57:51.747542  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:57:51.750376  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:57:51.753437  396441 out.go:252]   - Booting up control plane ...
	I1213 10:57:51.753548  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:57:51.753629  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:57:51.754233  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:57:51.768926  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:57:51.769192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:57:51.780537  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:57:51.780629  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:57:51.780668  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:57:51.907080  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:57:51.907187  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:01:51.907939  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001143765s
	I1213 11:01:51.907957  396441 kubeadm.go:319] 
	I1213 11:01:51.908010  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:01:51.908040  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:01:51.908138  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:01:51.908141  396441 kubeadm.go:319] 
	I1213 11:01:51.908238  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:01:51.908267  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:01:51.908295  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:01:51.908298  396441 kubeadm.go:319] 
	I1213 11:01:51.911942  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:01:51.912375  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:01:51.912489  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:01:51.912750  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:01:51.912759  396441 kubeadm.go:319] 
	I1213 11:01:51.912853  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:01:51.912889  396441 kubeadm.go:403] duration metric: took 12m7.937442674s to StartCluster
	I1213 11:01:51.912920  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:01:51.912979  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:01:51.938530  396441 cri.go:89] found id: ""
	I1213 11:01:51.938545  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.938552  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:01:51.938558  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:01:51.938614  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:01:51.963977  396441 cri.go:89] found id: ""
	I1213 11:01:51.963991  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.963998  396441 logs.go:284] No container was found matching "etcd"
	I1213 11:01:51.964003  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:01:51.964062  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:01:51.988936  396441 cri.go:89] found id: ""
	I1213 11:01:51.988951  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.988958  396441 logs.go:284] No container was found matching "coredns"
	I1213 11:01:51.988963  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:01:51.989016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:01:52.019417  396441 cri.go:89] found id: ""
	I1213 11:01:52.019431  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.019439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:01:52.019444  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:01:52.019504  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:01:52.046337  396441 cri.go:89] found id: ""
	I1213 11:01:52.046352  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.046360  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:01:52.046365  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:01:52.046426  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:01:52.072247  396441 cri.go:89] found id: ""
	I1213 11:01:52.072261  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.072269  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:01:52.072274  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:01:52.072335  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:01:52.098208  396441 cri.go:89] found id: ""
	I1213 11:01:52.098222  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.098230  396441 logs.go:284] No container was found matching "kindnet"
	I1213 11:01:52.098238  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 11:01:52.098248  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:01:52.165245  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 11:01:52.165265  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:01:52.179908  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:01:52.179924  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:01:52.245950  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:01:52.245965  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:01:52.245974  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:01:52.322777  396441 logs.go:123] Gathering logs for container status ...
	I1213 11:01:52.322795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 11:01:52.353497  396441 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:01:52.353528  396441 out.go:285] * 
	W1213 11:01:52.353591  396441 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.353607  396441 out.go:285] * 
	W1213 11:01:52.355785  396441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:01:52.362615  396441 out.go:203] 
	W1213 11:01:52.366304  396441 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.366353  396441 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:01:52.366376  396441 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:01:52.369563  396441 out.go:203] 
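
	Note: the suggestion printed above comes from minikube itself. A minimal retry along those lines — a sketch only, assuming the crio cgroup manager on this host is systemd (not confirmed by this log) and reusing the profile name functional-407525 seen above — would be:

	  minikube start -p functional-407525 --driver=docker --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd

	The cgroups v1 preflight warning additionally states that, on a cgroup v1 kernel such as this 5.15 AWS image, kubelet v1.35 will not start unless the kubelet configuration option 'FailCgroupV1' is set to 'false'; how that option is threaded through minikube is not shown in this log, so the command above may be insufficient on its own.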
	
	
	==> CRI-O <==
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.43259327Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432628568Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432669931Z" level=info msg="Create NRI interface"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432773423Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432782531Z" level=info msg="runtime interface created"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432793805Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432800656Z" level=info msg="runtime interface starting up..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432807844Z" level=info msg="starting plugins..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432820907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432883414Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:49:42 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.19567159Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c8401471-cf55-4e91-8c5f-25a7803eeff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.1966268Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=72a9b02f-646a-4554-ae9a-9e3da3b7ad0c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197123888Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9caf3dbd-ac4b-4ee0-a136-15962b2eeea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197584529Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=86fa4638-cc37-45ef-b1b9-31efae43690d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198007073Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=37f9bdfd-077a-4751-a897-e7c971db1d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198454331Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f02d4db1-79bc-4d79-9072-497dd5c75d43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198871681Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a0158e10-bee2-405d-9643-45512681023c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.293525942Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fa6c343-c4b6-41b8-a772-00d9ff9f481b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294225272Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f29d3de7-c9c2-4c34-9a76-76647c28c359 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294692649Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=115a2b32-9e68-43c7-90af-1d4450976368 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295176544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cce5b0a2-af51-4974-8c4f-26d3aadd70cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295829785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bba9558c-4301-4576-890b-64bddc5af9b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296320695Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59bc3a50-c36c-4024-8506-47dbb78201d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296784429Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=97458369-23f9-4acf-a127-9b41f30c00a3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:01:53.554825   21187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:53.556052   21187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:53.556810   21187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:53.557806   21187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:53.559408   21187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:01:53 up  2:44,  0 user,  load average: 0.03, 0.15, 0.41
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:01:50 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:01:51 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 13 11:01:51 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:51 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:51 functional-407525 kubelet[20993]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:51 functional-407525 kubelet[20993]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:51 functional-407525 kubelet[20993]: E1213 11:01:51.569902   20993 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:01:51 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:01:51 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:01:52 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 13 11:01:52 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:52 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:52 functional-407525 kubelet[21071]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:52 functional-407525 kubelet[21071]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:52 functional-407525 kubelet[21071]: E1213 11:01:52.359980   21071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:01:52 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:01:52 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:01:53 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 13 11:01:53 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:53 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:53 functional-407525 kubelet[21103]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:53 functional-407525 kubelet[21103]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:53 functional-407525 kubelet[21103]: E1213 11:01:53.091804   21103 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:01:53 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:01:53 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (408.131473ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.65s)
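Context for the failure above: every kubelet restart in the captured journal exits with "failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1", so the API server on port 8441 never becomes reachable and the ExtraConfig restart eventually times out. As a minimal diagnostic sketch (these commands are not part of the test harness output and assume shell access to the minikube node), the node's cgroup mode can be confirmed like this:

	# Prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on a cgroup v1 host.
	stat -fc %T /sys/fs/cgroup/
	# cgroup v2 hosts also expose this file; it is absent under cgroup v1.
	ls /sys/fs/cgroup/cgroup.controllers

Given the validation error quoted in the journal, a cgroup v1 result would mean the host (or the kicbase container's cgroup setup) needs to provide cgroup v2, or a Kubernetes version that still tolerates cgroup v1 has to be used.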

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-407525 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-407525 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (58.077932ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-407525 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (309.619255ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-371413 image ls --format yaml --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ ssh     │ functional-371413 ssh pgrep buildkitd                                                                                                             │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ image   │ functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr                                            │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls                                                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format json --alsologtostderr                                                                                        │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ image   │ functional-371413 image ls --format table --alsologtostderr                                                                                       │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ delete  │ -p functional-371413                                                                                                                              │ functional-371413 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │ 13 Dec 25 10:34 UTC │
	│ start   │ -p functional-407525 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:34 UTC │                     │
	│ start   │ -p functional-407525 --alsologtostderr -v=8                                                                                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:43 UTC │                     │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add registry.k8s.io/pause:latest                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache add minikube-local-cache-test:functional-407525                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ functional-407525 cache delete minikube-local-cache-test:functional-407525                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl images                                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ cache   │ functional-407525 cache reload                                                                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ kubectl │ functional-407525 kubectl -- --context functional-407525 get pods                                                                                 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ start   │ -p functional-407525 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:49:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:49:39.014629  396441 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:49:39.014755  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.014760  396441 out.go:374] Setting ErrFile to fd 2...
	I1213 10:49:39.014764  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.015052  396441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:49:39.015432  396441 out.go:368] Setting JSON to false
	I1213 10:49:39.016356  396441 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9131,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:49:39.016423  396441 start.go:143] virtualization:  
	I1213 10:49:39.019850  396441 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:49:39.022886  396441 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:49:39.022964  396441 notify.go:221] Checking for updates...
	I1213 10:49:39.029514  396441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:49:39.032457  396441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:49:39.035302  396441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:49:39.038191  396441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:49:39.041178  396441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:49:39.044626  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:39.044735  396441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:49:39.073132  396441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:49:39.073240  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.131952  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.12226015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.132042  396441 docker.go:319] overlay module found
	I1213 10:49:39.135181  396441 out.go:179] * Using the docker driver based on existing profile
	I1213 10:49:39.138004  396441 start.go:309] selected driver: docker
	I1213 10:49:39.138012  396441 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.138117  396441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:49:39.138218  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.201683  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.192871513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.202106  396441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:49:39.202131  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:39.202182  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:39.202230  396441 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.205440  396441 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:49:39.208563  396441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:49:39.211465  396441 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:49:39.214245  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:39.214282  396441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:49:39.214290  396441 cache.go:65] Caching tarball of preloaded images
	I1213 10:49:39.214340  396441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:49:39.214371  396441 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:49:39.214379  396441 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:49:39.214508  396441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:49:39.233590  396441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:49:39.233607  396441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:49:39.233619  396441 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:49:39.233649  396441 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:49:39.233703  396441 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-407525"
	I1213 10:49:39.233721  396441 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:49:39.233725  396441 fix.go:54] fixHost starting: 
	I1213 10:49:39.234003  396441 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:49:39.250771  396441 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:49:39.250790  396441 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:49:39.253977  396441 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:49:39.254007  396441 machine.go:94] provisionDockerMachine start ...
	I1213 10:49:39.254089  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.270672  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.270992  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.270998  396441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:49:39.419071  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.419086  396441 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:49:39.419147  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.437001  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.437302  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.437311  396441 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:49:39.596975  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.597049  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.614748  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.615049  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.615063  396441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:49:39.763894  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:49:39.763910  396441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:49:39.763930  396441 ubuntu.go:190] setting up certificates
	I1213 10:49:39.763939  396441 provision.go:84] configureAuth start
	I1213 10:49:39.763997  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:39.782226  396441 provision.go:143] copyHostCerts
	I1213 10:49:39.782297  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:49:39.782308  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:49:39.782382  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:49:39.782470  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:49:39.782473  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:49:39.782511  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:49:39.782561  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:49:39.782565  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:49:39.782587  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:49:39.782630  396441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:49:40.264423  396441 provision.go:177] copyRemoteCerts
	I1213 10:49:40.264477  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:49:40.264518  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.288593  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.395503  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:49:40.413777  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:49:40.432071  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:49:40.449556  396441 provision.go:87] duration metric: took 685.604236ms to configureAuth
	I1213 10:49:40.449573  396441 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:49:40.449767  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:40.449873  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.466720  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:40.467023  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:40.467036  396441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:49:40.812989  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:49:40.813002  396441 machine.go:97] duration metric: took 1.558987505s to provisionDockerMachine
	I1213 10:49:40.813012  396441 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:49:40.813024  396441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:49:40.813085  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:49:40.813128  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.831095  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.935727  396441 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:49:40.939068  396441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:49:40.939087  396441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:49:40.939096  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:49:40.939151  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:49:40.939232  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:49:40.939303  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:49:40.939344  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:49:40.947101  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:40.964732  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:49:40.981668  396441 start.go:296] duration metric: took 168.641746ms for postStartSetup
	I1213 10:49:40.981767  396441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:49:40.981804  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.001302  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.104610  396441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:49:41.109266  396441 fix.go:56] duration metric: took 1.875532342s for fixHost
	I1213 10:49:41.109282  396441 start.go:83] releasing machines lock for "functional-407525", held for 1.875571571s
	I1213 10:49:41.109349  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:41.125841  396441 ssh_runner.go:195] Run: cat /version.json
	I1213 10:49:41.125888  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.126157  396441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:49:41.126214  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.148984  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.157093  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.349053  396441 ssh_runner.go:195] Run: systemctl --version
	I1213 10:49:41.355137  396441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:49:41.394464  396441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:49:41.399282  396441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:49:41.399342  396441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:49:41.407074  396441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:49:41.407089  396441 start.go:496] detecting cgroup driver to use...
	I1213 10:49:41.407118  396441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:49:41.407177  396441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:49:41.422248  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:49:41.434814  396441 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:49:41.434866  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:49:41.450404  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:49:41.463493  396441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:49:41.587216  396441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:49:41.708085  396441 docker.go:234] disabling docker service ...
	I1213 10:49:41.708178  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:49:41.726011  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:49:41.739486  396441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:49:41.858015  396441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:49:41.976835  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:49:41.990126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:49:42.004186  396441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:49:42.004281  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.015561  396441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:49:42.015636  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.026721  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.037311  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.047280  396441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:49:42.056517  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.067880  396441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.078430  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.089815  396441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:49:42.100093  396441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:49:42.110006  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.245156  396441 ssh_runner.go:195] Run: sudo systemctl restart crio
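The sed commands above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is reloaded and restarted. A Go sketch of the equivalent in-place rewrites, for illustration only (this is not the tool's implementation):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the two substitutions seen in the log: pin the
    // pause image and force the requested cgroup manager.
    func rewriteCrioConf(conf []byte, pauseImage, cgroupManager string) []byte {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := pause.ReplaceAll(conf, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
        out = cgroup.ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
        return out
    }

    func main() {
        in := []byte("# pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")
        fmt.Print(string(rewriteCrioConf(in, "registry.k8s.io/pause:3.10.1", "cgroupfs")))
    }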
	I1213 10:49:42.438084  396441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:49:42.438159  396441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:49:42.442010  396441 start.go:564] Will wait 60s for crictl version
	I1213 10:49:42.442064  396441 ssh_runner.go:195] Run: which crictl
	I1213 10:49:42.445629  396441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:49:42.469110  396441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:49:42.469189  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.498052  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.536633  396441 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:49:42.539603  396441 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:49:42.571469  396441 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:49:42.578474  396441 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:49:42.582400  396441 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:49:42.582534  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:42.582601  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.622515  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.622526  396441 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:49:42.622581  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.647505  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.647532  396441 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:49:42.647540  396441 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:49:42.647645  396441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
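The kubelet drop-in shown above is generated from the cluster config. A minimal Go sketch of rendering such a unit with text/template; the template text and field names here are assumptions for illustration, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // dropIn is an illustrative systemd drop-in template modeled on the unit
    // printed in the log above.
    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        _ = t.Execute(os.Stdout, map[string]string{
            "KubeletPath": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
            "NodeName":    "functional-407525",
            "NodeIP":      "192.168.49.2",
        })
    }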
	I1213 10:49:42.647723  396441 ssh_runner.go:195] Run: crio config
	I1213 10:49:42.707356  396441 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:49:42.707414  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:42.707422  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:42.707430  396441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:49:42.707452  396441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:49:42.707613  396441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 10:49:42.707687  396441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:49:42.715307  396441 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:49:42.715378  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:49:42.722969  396441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:49:42.735593  396441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:49:42.747933  396441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
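The kubeadm.yaml.new written above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A standalone sketch that walks those documents with gopkg.in/yaml.v3, e.g. to sanity-check the generated file; this is not part of minikube itself:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Decode each YAML document in turn and print its apiVersion/kind.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                panic(err)
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }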
	I1213 10:49:42.760993  396441 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:49:42.765274  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.881089  396441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:49:43.272837  396441 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:49:43.272850  396441 certs.go:195] generating shared ca certs ...
	I1213 10:49:43.272866  396441 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:49:43.273008  396441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:49:43.273053  396441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:49:43.273060  396441 certs.go:257] generating profile certs ...
	I1213 10:49:43.273166  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:49:43.273224  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:49:43.273264  396441 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:49:43.273384  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:49:43.273414  396441 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:49:43.273421  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:49:43.273447  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:49:43.273476  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:49:43.273501  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:49:43.273543  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:43.274189  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:49:43.293217  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:49:43.313563  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:49:43.332800  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:49:43.356461  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:49:43.375598  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:49:43.393764  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:49:43.411407  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:49:43.429560  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:49:43.447014  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:49:43.465017  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:49:43.483101  396441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:49:43.496527  396441 ssh_runner.go:195] Run: openssl version
	I1213 10:49:43.502994  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.510763  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:49:43.518540  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522603  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522661  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.566464  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:49:43.574093  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.581656  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:49:43.589363  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593193  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593258  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.634480  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:49:43.641940  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.649200  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:49:43.656832  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660735  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660790  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.706761  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:49:43.714203  396441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:49:43.718007  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:49:43.761049  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:49:43.803978  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:49:43.847848  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:49:43.889404  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:49:43.931127  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
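The openssl x509 -checkend 86400 calls above verify that each control-plane certificate is not within 24 hours of expiry. An equivalent standalone check in Go using crypto/x509 (sketch only, not minikube code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given window, matching `openssl x509 -checkend` semantics.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }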
	I1213 10:49:43.975457  396441 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:43.975563  396441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:49:43.975628  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.005477  396441 cri.go:89] found id: ""
	I1213 10:49:44.005555  396441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:49:44.016406  396441 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:49:44.016416  396441 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:49:44.016469  396441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:49:44.028094  396441 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.028621  396441 kubeconfig.go:125] found "functional-407525" server: "https://192.168.49.2:8441"
	I1213 10:49:44.029882  396441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:49:44.039549  396441 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:35:07.660360228 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:49:42.756829139 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1213 10:49:44.039559  396441 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:49:44.039569  396441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:49:44.039622  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.076693  396441 cri.go:89] found id: ""
	I1213 10:49:44.076751  396441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:49:44.096721  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:49:44.104663  396441 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 10:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 10:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:39 /etc/kubernetes/scheduler.conf
	
	I1213 10:49:44.104731  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:49:44.112473  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:49:44.119938  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.119996  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:49:44.127386  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.135062  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.135113  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.142352  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:49:44.150087  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.150140  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
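In the restart path above, each kubeconfig is kept only if it already references https://control-plane.minikube.internal:8441; otherwise it is removed so kubeadm regenerates it. A standalone Go sketch of that check (hypothetical helper, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureEndpoint keeps a kubeconfig that already mentions the expected
    // control-plane endpoint and removes it otherwise.
    func ensureEndpoint(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil
        }
        fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
        return os.Remove(path)
    }

    func main() {
        for _, f := range []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := ensureEndpoint(f, "https://control-plane.minikube.internal:8441"); err != nil {
                fmt.Println(err)
            }
        }
    }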
	I1213 10:49:44.157689  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:49:44.166075  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:44.211012  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.340316  396441 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.129279793s)
	I1213 10:49:46.340374  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.548065  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.621630  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.676051  396441 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:49:46.676117  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:47.176335  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:47.676600  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:48.176220  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:48.676514  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:49.177109  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:49.677029  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:50.176294  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:50.676405  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:51.176207  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:51.677115  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:52.176309  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:52.676843  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:53.176518  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:53.677139  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:54.176272  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:54.677116  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:55.176949  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:55.677027  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:56.176855  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:56.677287  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:57.176985  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:57.676291  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:58.176321  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:58.676311  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:59.177074  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:59.676498  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:00.177244  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:00.676377  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:01.176944  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:01.676370  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:02.176565  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:02.676374  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:03.176325  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:03.677205  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:04.177202  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:04.676995  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:05.176541  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:05.676768  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:06.176328  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:06.676318  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:07.176298  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:07.676607  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:08.176977  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:08.676972  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:09.176754  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:09.676315  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:10.176824  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:10.676204  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:11.177281  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:11.676341  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:12.176307  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:12.677058  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:13.176868  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:13.676294  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:14.176196  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:14.676345  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:15.176220  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:15.676507  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:16.177216  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:16.676814  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:17.177128  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:17.676923  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:18.177103  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:18.677241  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:19.176631  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:19.676250  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:20.177039  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:20.676330  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:21.176991  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:21.676979  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:22.176310  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:22.676330  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:23.177072  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:23.676322  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:24.177240  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:24.676323  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:25.176911  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:25.677053  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:26.176471  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:26.676452  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:27.177028  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:27.676317  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:28.176975  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:28.676338  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:29.176379  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:29.676600  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:30.176351  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:30.676375  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:31.177240  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:31.677058  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:32.176843  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:32.676436  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:33.176344  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:33.677269  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:34.176296  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:34.676316  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:35.176823  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:35.676192  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:36.177128  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:36.677155  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:37.176402  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:37.676320  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:38.176310  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:38.677003  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:39.176915  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:39.676966  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:40.176371  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:40.676264  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:41.176771  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:41.676461  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:42.176264  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:42.676335  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:43.177015  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:43.676312  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:44.176383  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:44.676333  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:45.176214  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:45.676348  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:46.177104  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
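The run of pgrep calls above is a roughly 500 ms poll for the kube-apiserver process that gives up after about a minute, at which point minikube falls back to gathering diagnostics. A minimal Go sketch of such a poll-until-deadline loop (run locally here for illustration; minikube performs it over SSH):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess runs pgrep at the given interval until it succeeds or the
    // context deadline passes.
    func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
        defer cancel()
        if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
            fmt.Println("apiserver process never appeared:", err)
        }
    }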
	I1213 10:50:46.676677  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:46.676771  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:46.701985  396441 cri.go:89] found id: ""
	I1213 10:50:46.701999  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.702006  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:46.702011  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:46.702065  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:46.727261  396441 cri.go:89] found id: ""
	I1213 10:50:46.727275  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.727282  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:46.727287  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:46.727352  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:46.756930  396441 cri.go:89] found id: ""
	I1213 10:50:46.756944  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.756952  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:46.756957  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:46.757025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:46.788731  396441 cri.go:89] found id: ""
	I1213 10:50:46.788745  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.788752  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:46.788757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:46.788810  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:46.816991  396441 cri.go:89] found id: ""
	I1213 10:50:46.817004  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.817012  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:46.817017  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:46.817072  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:46.847482  396441 cri.go:89] found id: ""
	I1213 10:50:46.847498  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.847505  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:46.847559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:46.847628  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:46.872720  396441 cri.go:89] found id: ""
	I1213 10:50:46.872734  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.872741  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:46.872749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:46.872759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:46.942912  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:46.942931  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:46.971862  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:46.971879  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:47.038918  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:47.038938  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:47.053895  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:47.053912  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:47.119106  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
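The describe-nodes failure above comes down to nothing listening on the apiserver port, hence kubectl's "connection refused" on localhost:8441. A quick standalone Go check that reproduces the same symptom (sketch only):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Attempt a plain TCP dial to the apiserver port used in this profile.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }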
	I1213 10:50:49.619370  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:49.629150  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:49.629213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:49.658173  396441 cri.go:89] found id: ""
	I1213 10:50:49.658186  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.658194  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:49.658199  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:49.658256  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:49.683401  396441 cri.go:89] found id: ""
	I1213 10:50:49.683414  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.683422  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:49.683427  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:49.683484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:49.708416  396441 cri.go:89] found id: ""
	I1213 10:50:49.708440  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.708448  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:49.708454  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:49.708520  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:49.737305  396441 cri.go:89] found id: ""
	I1213 10:50:49.737319  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.737326  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:49.737331  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:49.737385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:49.761415  396441 cri.go:89] found id: ""
	I1213 10:50:49.761431  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.761438  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:49.761443  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:49.761496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:49.805122  396441 cri.go:89] found id: ""
	I1213 10:50:49.805135  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.805142  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:49.805147  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:49.805205  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:49.846981  396441 cri.go:89] found id: ""
	I1213 10:50:49.846995  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.847002  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:49.847010  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:49.847020  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:49.918064  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:49.918084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:49.947649  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:49.947666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:50.012059  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:50.012084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:50.028985  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:50.029010  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:50.098147  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.599845  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:52.610036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:52.610095  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:52.638582  396441 cri.go:89] found id: ""
	I1213 10:50:52.638597  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.638603  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:52.638608  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:52.638670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:52.663295  396441 cri.go:89] found id: ""
	I1213 10:50:52.663308  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.663315  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:52.663320  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:52.663375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:52.689168  396441 cri.go:89] found id: ""
	I1213 10:50:52.689182  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.689189  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:52.689194  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:52.689253  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:52.714589  396441 cri.go:89] found id: ""
	I1213 10:50:52.714602  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.714610  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:52.714615  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:52.714669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:52.742324  396441 cri.go:89] found id: ""
	I1213 10:50:52.742338  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.742345  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:52.742363  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:52.742420  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:52.778053  396441 cri.go:89] found id: ""
	I1213 10:50:52.778067  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.778074  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:52.778079  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:52.778138  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:52.805632  396441 cri.go:89] found id: ""
	I1213 10:50:52.805646  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.805653  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:52.805661  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:52.805671  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:52.875461  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:52.875481  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:52.890245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:52.890261  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:52.957587  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.957599  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:52.957612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:53.025361  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:53.025388  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:55.556570  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:55.566463  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:55.566537  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:55.593903  396441 cri.go:89] found id: ""
	I1213 10:50:55.593917  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.593924  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:55.593929  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:55.593992  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:55.619079  396441 cri.go:89] found id: ""
	I1213 10:50:55.619093  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.619101  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:55.619106  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:55.619162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:55.645916  396441 cri.go:89] found id: ""
	I1213 10:50:55.645931  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.645938  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:55.645943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:55.646012  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:55.671377  396441 cri.go:89] found id: ""
	I1213 10:50:55.671397  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.671405  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:55.671410  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:55.671469  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:55.697872  396441 cri.go:89] found id: ""
	I1213 10:50:55.697886  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.697894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:55.697917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:55.697976  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:55.723576  396441 cri.go:89] found id: ""
	I1213 10:50:55.723589  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.723597  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:55.723602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:55.723655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:55.751256  396441 cri.go:89] found id: ""
	I1213 10:50:55.751270  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.751277  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:55.751286  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:55.751296  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:55.821963  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:55.821982  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:55.836343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:55.836357  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:55.903582  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:55.903594  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:55.903605  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:55.975012  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:55.975037  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:58.506699  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:58.517103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:58.517162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:58.542695  396441 cri.go:89] found id: ""
	I1213 10:50:58.542717  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.542725  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:58.542730  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:58.542787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:58.574075  396441 cri.go:89] found id: ""
	I1213 10:50:58.574089  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.574096  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:58.574101  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:58.574161  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:58.602982  396441 cri.go:89] found id: ""
	I1213 10:50:58.602997  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.603003  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:58.603008  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:58.603066  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:58.628158  396441 cri.go:89] found id: ""
	I1213 10:50:58.628172  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.628179  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:58.628185  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:58.628241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:58.653050  396441 cri.go:89] found id: ""
	I1213 10:50:58.653064  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.653071  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:58.653076  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:58.653133  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:58.678853  396441 cri.go:89] found id: ""
	I1213 10:50:58.678867  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.678875  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:58.678880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:58.678938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:58.704667  396441 cri.go:89] found id: ""
	I1213 10:50:58.704681  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.704689  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:58.704696  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:58.704706  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:58.769708  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:58.769731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:58.786197  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:58.786214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:58.859562  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:58.859572  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:58.859583  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:58.929132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:58.929151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.457488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:01.467675  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:01.467734  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:01.494648  396441 cri.go:89] found id: ""
	I1213 10:51:01.494662  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.494669  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:01.494675  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:01.494735  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:01.524042  396441 cri.go:89] found id: ""
	I1213 10:51:01.524056  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.524062  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:01.524068  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:01.524130  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:01.550111  396441 cri.go:89] found id: ""
	I1213 10:51:01.550126  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.550133  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:01.550139  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:01.550207  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:01.579191  396441 cri.go:89] found id: ""
	I1213 10:51:01.579205  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.579213  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:01.579218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:01.579274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:01.606365  396441 cri.go:89] found id: ""
	I1213 10:51:01.606379  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.606387  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:01.606393  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:01.606456  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:01.632570  396441 cri.go:89] found id: ""
	I1213 10:51:01.632584  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.632593  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:01.632598  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:01.632659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:01.659645  396441 cri.go:89] found id: ""
	I1213 10:51:01.659663  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.659671  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:01.659683  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:01.659694  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.689331  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:01.689348  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:01.754743  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:01.754766  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:01.772787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:01.772804  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:01.858533  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:01.858545  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:01.858555  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:04.427384  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:04.437715  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:04.437777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:04.463479  396441 cri.go:89] found id: ""
	I1213 10:51:04.463494  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.463501  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:04.463521  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:04.463580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:04.491057  396441 cri.go:89] found id: ""
	I1213 10:51:04.491072  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.491079  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:04.491084  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:04.491142  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:04.518458  396441 cri.go:89] found id: ""
	I1213 10:51:04.518471  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.518478  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:04.518483  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:04.518558  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:04.544830  396441 cri.go:89] found id: ""
	I1213 10:51:04.544844  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.544852  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:04.544857  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:04.544915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:04.571154  396441 cri.go:89] found id: ""
	I1213 10:51:04.571168  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.571177  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:04.571182  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:04.571241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:04.596261  396441 cri.go:89] found id: ""
	I1213 10:51:04.596275  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.596283  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:04.596288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:04.596344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:04.625558  396441 cri.go:89] found id: ""
	I1213 10:51:04.625572  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.625580  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:04.625587  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:04.625598  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:04.656944  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:04.656961  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:04.722740  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:04.722759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:04.738031  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:04.738051  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:04.817645  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:04.817655  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:04.817669  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.391199  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:07.401600  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:07.401657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:07.427331  396441 cri.go:89] found id: ""
	I1213 10:51:07.427346  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.427353  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:07.427358  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:07.427417  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:07.452053  396441 cri.go:89] found id: ""
	I1213 10:51:07.452067  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.452074  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:07.452079  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:07.452134  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:07.477750  396441 cri.go:89] found id: ""
	I1213 10:51:07.477764  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.477772  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:07.477777  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:07.477836  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:07.506642  396441 cri.go:89] found id: ""
	I1213 10:51:07.506657  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.506664  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:07.506669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:07.506727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:07.533730  396441 cri.go:89] found id: ""
	I1213 10:51:07.533744  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.533751  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:07.533757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:07.533815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:07.561505  396441 cri.go:89] found id: ""
	I1213 10:51:07.561521  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.561528  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:07.561534  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:07.561587  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:07.586129  396441 cri.go:89] found id: ""
	I1213 10:51:07.586142  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.586149  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:07.586157  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:07.586167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:07.601150  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:07.601167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:07.664624  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:07.664636  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:07.664649  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.733213  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:07.733233  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:07.762844  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:07.762860  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.334136  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:10.344504  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:10.344575  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:10.369562  396441 cri.go:89] found id: ""
	I1213 10:51:10.369575  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.369582  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:10.369587  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:10.369652  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:10.399083  396441 cri.go:89] found id: ""
	I1213 10:51:10.399097  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.399104  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:10.399110  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:10.399166  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:10.425761  396441 cri.go:89] found id: ""
	I1213 10:51:10.425786  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.425794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:10.425799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:10.425863  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:10.452658  396441 cri.go:89] found id: ""
	I1213 10:51:10.452672  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.452679  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:10.452685  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:10.452741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:10.477286  396441 cri.go:89] found id: ""
	I1213 10:51:10.477300  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.477308  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:10.477313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:10.477375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:10.502400  396441 cri.go:89] found id: ""
	I1213 10:51:10.502414  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.502421  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:10.502427  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:10.502483  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:10.527113  396441 cri.go:89] found id: ""
	I1213 10:51:10.527127  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.527134  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:10.527142  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:10.527152  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:10.558574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:10.558590  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.623165  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:10.623185  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:10.637513  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:10.637528  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:10.700566  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:10.700576  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:10.700586  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.275221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:13.285371  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:13.285427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:13.310677  396441 cri.go:89] found id: ""
	I1213 10:51:13.310691  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.310699  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:13.310704  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:13.310766  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:13.339471  396441 cri.go:89] found id: ""
	I1213 10:51:13.339485  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.339493  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:13.339498  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:13.339572  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:13.363772  396441 cri.go:89] found id: ""
	I1213 10:51:13.363787  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.363794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:13.363799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:13.363854  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:13.389059  396441 cri.go:89] found id: ""
	I1213 10:51:13.389073  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.389080  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:13.389085  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:13.389140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:13.414845  396441 cri.go:89] found id: ""
	I1213 10:51:13.414859  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.414866  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:13.414871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:13.414926  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:13.444040  396441 cri.go:89] found id: ""
	I1213 10:51:13.444054  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.444061  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:13.444066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:13.444122  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:13.472753  396441 cri.go:89] found id: ""
	I1213 10:51:13.472769  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.472779  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:13.472791  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:13.472806  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:13.487326  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:13.487342  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:13.553218  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:13.553229  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:13.553239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.623642  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:13.623662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:13.652820  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:13.652836  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:16.219667  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:16.229714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:16.229774  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:16.256550  396441 cri.go:89] found id: ""
	I1213 10:51:16.256564  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.256571  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:16.256576  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:16.256638  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:16.281266  396441 cri.go:89] found id: ""
	I1213 10:51:16.281280  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.281286  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:16.281292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:16.281347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:16.313494  396441 cri.go:89] found id: ""
	I1213 10:51:16.313509  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.313517  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:16.313522  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:16.313580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:16.338750  396441 cri.go:89] found id: ""
	I1213 10:51:16.338775  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.338783  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:16.338788  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:16.338852  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:16.363883  396441 cri.go:89] found id: ""
	I1213 10:51:16.363898  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.363905  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:16.363910  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:16.363980  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:16.390029  396441 cri.go:89] found id: ""
	I1213 10:51:16.390053  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.390060  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:16.390066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:16.390123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:16.415617  396441 cri.go:89] found id: ""
	I1213 10:51:16.415630  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.415637  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:16.415645  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:16.415660  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:16.430631  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:16.430647  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:16.492590  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:16.492603  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:16.492613  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:16.561556  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:16.561578  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:16.589545  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:16.589561  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:19.159792  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:19.170596  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:19.170661  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:19.198953  396441 cri.go:89] found id: ""
	I1213 10:51:19.198967  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.198974  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:19.198979  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:19.199036  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:19.225113  396441 cri.go:89] found id: ""
	I1213 10:51:19.225128  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.225135  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:19.225140  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:19.225195  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:19.250894  396441 cri.go:89] found id: ""
	I1213 10:51:19.250908  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.250916  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:19.250921  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:19.250975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:19.277076  396441 cri.go:89] found id: ""
	I1213 10:51:19.277091  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.277098  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:19.277103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:19.277164  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:19.304480  396441 cri.go:89] found id: ""
	I1213 10:51:19.304495  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.304502  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:19.304507  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:19.304567  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:19.330126  396441 cri.go:89] found id: ""
	I1213 10:51:19.330140  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.330147  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:19.330152  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:19.330214  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:19.355882  396441 cri.go:89] found id: ""
	I1213 10:51:19.355896  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.355904  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:19.355912  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:19.355922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:19.423413  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:19.423435  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:19.457267  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:19.457283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:19.523500  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:19.523525  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:19.538313  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:19.538329  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:19.607695  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
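Every kubectl attempt in these blocks fails the same way: dial tcp [::1]:8441, connection refused, which is consistent with the crictl listings finding no kube-apiserver container. A hedged sketch of checking directly on the node whether anything is listening on that port; these commands are standard Linux/apiserver tooling and are not taken from this report, and 8441 is simply the port quoted in the errors above:

    # hypothetical direct checks, assuming a shell on the node
    sudo ss -ltnp | grep 8441                   # is any process bound to the apiserver port?
    curl -k https://localhost:8441/healthz      # apiserver health endpoint; "connection refused" while it is down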
	I1213 10:51:22.108783  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:22.118887  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:22.118946  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:22.146848  396441 cri.go:89] found id: ""
	I1213 10:51:22.146863  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.146870  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:22.146875  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:22.146929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:22.173022  396441 cri.go:89] found id: ""
	I1213 10:51:22.173036  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.173049  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:22.173055  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:22.173110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:22.197674  396441 cri.go:89] found id: ""
	I1213 10:51:22.197687  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.197695  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:22.197700  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:22.197757  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:22.225539  396441 cri.go:89] found id: ""
	I1213 10:51:22.225553  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.225560  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:22.225565  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:22.225624  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:22.253269  396441 cri.go:89] found id: ""
	I1213 10:51:22.253282  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.253290  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:22.253294  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:22.253355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:22.279157  396441 cri.go:89] found id: ""
	I1213 10:51:22.279172  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.279179  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:22.279184  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:22.279238  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:22.308952  396441 cri.go:89] found id: ""
	I1213 10:51:22.308965  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.308972  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:22.308979  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:22.309000  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:22.323813  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:22.323828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:22.388544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:22.388554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:22.388565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:22.456639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:22.456659  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:22.485416  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:22.485432  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.052020  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:25.063916  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:25.063975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:25.100470  396441 cri.go:89] found id: ""
	I1213 10:51:25.100484  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.100492  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:25.100498  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:25.100559  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:25.128317  396441 cri.go:89] found id: ""
	I1213 10:51:25.128331  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.128339  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:25.128344  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:25.128399  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:25.159302  396441 cri.go:89] found id: ""
	I1213 10:51:25.159316  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.159323  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:25.159328  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:25.159386  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:25.186563  396441 cri.go:89] found id: ""
	I1213 10:51:25.186577  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.186591  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:25.186597  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:25.186656  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:25.212652  396441 cri.go:89] found id: ""
	I1213 10:51:25.212666  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.212673  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:25.212678  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:25.212738  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:25.238215  396441 cri.go:89] found id: ""
	I1213 10:51:25.238229  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.238236  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:25.238242  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:25.238314  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:25.264506  396441 cri.go:89] found id: ""
	I1213 10:51:25.264519  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.264526  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:25.264533  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:25.264544  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:25.293035  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:25.293052  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.358428  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:25.358448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:25.373611  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:25.373627  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:25.438267  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:25.438277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:25.438288  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
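Each pass enumerates the same seven component names (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) with `crictl ps -a --quiet --name=...`, and every listing returns `found id: ""`, so no control-plane container has even been created yet. A compact sketch of the same per-component check as a single loop, assuming a shell on the node; only the loop wrapper is new, the crictl invocation is the one shown above:

    # hypothetical one-liner equivalent of the per-component crictl listings above
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      printf '%s: ' "$c"
      sudo crictl ps -a --quiet --name="$c" | wc -l   # 0 => no container found for that component
    done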
	I1213 10:51:28.007912  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:28.020840  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:28.020914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:28.054985  396441 cri.go:89] found id: ""
	I1213 10:51:28.054999  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.055007  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:28.055012  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:28.055076  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:28.086101  396441 cri.go:89] found id: ""
	I1213 10:51:28.086116  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.086123  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:28.086128  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:28.086184  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:28.114710  396441 cri.go:89] found id: ""
	I1213 10:51:28.114725  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.114732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:28.114737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:28.114796  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:28.141803  396441 cri.go:89] found id: ""
	I1213 10:51:28.141817  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.141825  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:28.141831  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:28.141891  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:28.176974  396441 cri.go:89] found id: ""
	I1213 10:51:28.176989  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.176997  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:28.177002  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:28.177063  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:28.202686  396441 cri.go:89] found id: ""
	I1213 10:51:28.202700  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.202707  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:28.202712  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:28.202777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:28.229573  396441 cri.go:89] found id: ""
	I1213 10:51:28.229587  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.229595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:28.229604  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:28.229617  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:28.245053  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:28.245070  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:28.314477  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:28.314487  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:28.314513  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:28.382755  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:28.382775  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:28.411608  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:28.411626  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:30.977998  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:30.988313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:30.988371  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:31.017637  396441 cri.go:89] found id: ""
	I1213 10:51:31.017652  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.017659  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:31.017664  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:31.017739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:31.051049  396441 cri.go:89] found id: ""
	I1213 10:51:31.051064  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.051071  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:31.051076  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:31.051147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:31.091994  396441 cri.go:89] found id: ""
	I1213 10:51:31.092012  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.092019  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:31.092025  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:31.092087  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:31.121068  396441 cri.go:89] found id: ""
	I1213 10:51:31.121083  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.121090  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:31.121095  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:31.121154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:31.148227  396441 cri.go:89] found id: ""
	I1213 10:51:31.148240  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.148248  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:31.148253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:31.148309  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:31.174904  396441 cri.go:89] found id: ""
	I1213 10:51:31.174919  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.174926  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:31.174932  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:31.174996  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:31.200730  396441 cri.go:89] found id: ""
	I1213 10:51:31.200743  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.200750  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:31.200757  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:31.200768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:31.215296  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:31.215315  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:31.279266  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:31.279277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:31.279286  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:31.346253  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:31.346273  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:31.374790  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:31.374805  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:33.942724  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:33.953904  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:33.953965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:33.979791  396441 cri.go:89] found id: ""
	I1213 10:51:33.979806  396441 logs.go:282] 0 containers: []
	W1213 10:51:33.979813  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:33.979819  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:33.979882  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:34.009113  396441 cri.go:89] found id: ""
	I1213 10:51:34.009129  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.009139  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:34.009145  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:34.009213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:34.054885  396441 cri.go:89] found id: ""
	I1213 10:51:34.054903  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.054911  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:34.054917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:34.054978  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:34.087332  396441 cri.go:89] found id: ""
	I1213 10:51:34.087346  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.087354  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:34.087360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:34.087416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:34.118541  396441 cri.go:89] found id: ""
	I1213 10:51:34.118556  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.118563  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:34.118568  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:34.118626  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:34.148286  396441 cri.go:89] found id: ""
	I1213 10:51:34.148300  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.148308  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:34.148313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:34.148368  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:34.174436  396441 cri.go:89] found id: ""
	I1213 10:51:34.174450  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.174457  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:34.174465  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:34.174484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:34.239233  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:34.239255  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:34.253915  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:34.253932  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:34.319992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:34.320001  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:34.320011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:34.387971  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:34.387992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:36.918587  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:36.930360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:36.930424  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:36.956712  396441 cri.go:89] found id: ""
	I1213 10:51:36.956726  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.956733  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:36.956738  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:36.956795  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:36.982448  396441 cri.go:89] found id: ""
	I1213 10:51:36.982462  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.982469  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:36.982474  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:36.982541  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:37.014971  396441 cri.go:89] found id: ""
	I1213 10:51:37.014987  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.014994  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:37.015000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:37.015090  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:37.045960  396441 cri.go:89] found id: ""
	I1213 10:51:37.045974  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.045981  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:37.045987  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:37.046044  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:37.077901  396441 cri.go:89] found id: ""
	I1213 10:51:37.077915  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.077933  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:37.077938  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:37.077995  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:37.105187  396441 cri.go:89] found id: ""
	I1213 10:51:37.105207  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.105214  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:37.105220  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:37.105275  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:37.134077  396441 cri.go:89] found id: ""
	I1213 10:51:37.134102  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.134110  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:37.134118  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:37.134129  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:37.199336  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:37.199355  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:37.213787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:37.213808  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:37.282802  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:37.282817  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:37.282827  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:37.352930  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:37.352958  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
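The "describe nodes" probe that keeps failing in these cycles is the exact kubectl invocation shown in the Run: lines, using the versioned binary and kubeconfig staged under /var/lib/minikube; it exits with status 1 as long as nothing answers on port 8441. A sketch of running that probe directly on the node, with the command taken verbatim from the log above:

    # the describe-nodes probe; exits non-zero while the apiserver is unreachable
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit status: $?"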
	I1213 10:51:39.888029  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:39.898120  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:39.898197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:39.925423  396441 cri.go:89] found id: ""
	I1213 10:51:39.925437  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.925444  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:39.925450  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:39.925510  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:39.951432  396441 cri.go:89] found id: ""
	I1213 10:51:39.951446  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.951454  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:39.951459  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:39.951547  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:39.977216  396441 cri.go:89] found id: ""
	I1213 10:51:39.977231  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.977238  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:39.977244  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:39.977298  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:40.019791  396441 cri.go:89] found id: ""
	I1213 10:51:40.019808  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.019816  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:40.019823  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:40.019900  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:40.051826  396441 cri.go:89] found id: ""
	I1213 10:51:40.051840  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.051847  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:40.051853  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:40.051928  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:40.091165  396441 cri.go:89] found id: ""
	I1213 10:51:40.091192  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.091200  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:40.091206  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:40.091272  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:40.122957  396441 cri.go:89] found id: ""
	I1213 10:51:40.122972  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.122979  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:40.122986  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:40.122998  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:40.186192  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:40.186204  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:40.186214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:40.252986  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:40.253005  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:40.283019  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:40.283042  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:40.347489  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:40.347521  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:42.863361  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:42.874757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:42.874824  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:42.899348  396441 cri.go:89] found id: ""
	I1213 10:51:42.899362  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.899370  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:42.899375  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:42.899440  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:42.925079  396441 cri.go:89] found id: ""
	I1213 10:51:42.925092  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.925100  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:42.925105  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:42.925165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:42.951388  396441 cri.go:89] found id: ""
	I1213 10:51:42.951403  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.951410  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:42.951415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:42.951470  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:42.977668  396441 cri.go:89] found id: ""
	I1213 10:51:42.977682  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.977688  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:42.977694  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:42.977748  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:43.002136  396441 cri.go:89] found id: ""
	I1213 10:51:43.002150  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.002157  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:43.002162  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:43.002219  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:43.038950  396441 cri.go:89] found id: ""
	I1213 10:51:43.038963  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.038971  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:43.038976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:43.039033  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:43.071573  396441 cri.go:89] found id: ""
	I1213 10:51:43.071588  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.071595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:43.071602  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:43.071615  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:43.141998  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:43.142019  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:43.157258  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:43.157274  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:43.224710  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:43.224720  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:43.224731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:43.294968  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:43.294988  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:45.825007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:45.835672  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:45.835743  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:45.861353  396441 cri.go:89] found id: ""
	I1213 10:51:45.861375  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.861382  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:45.861388  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:45.861452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:45.888508  396441 cri.go:89] found id: ""
	I1213 10:51:45.888522  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.888530  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:45.888534  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:45.888594  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:45.915026  396441 cri.go:89] found id: ""
	I1213 10:51:45.915040  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.915049  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:45.915054  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:45.915108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:45.940299  396441 cri.go:89] found id: ""
	I1213 10:51:45.940313  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.940320  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:45.940325  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:45.940382  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:45.965643  396441 cri.go:89] found id: ""
	I1213 10:51:45.965657  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.965664  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:45.965669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:45.965722  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:45.992269  396441 cri.go:89] found id: ""
	I1213 10:51:45.992283  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.992290  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:45.992295  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:45.992354  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:46.024907  396441 cri.go:89] found id: ""
	I1213 10:51:46.024922  396441 logs.go:282] 0 containers: []
	W1213 10:51:46.024941  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:46.024950  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:46.024980  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:46.072645  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:46.072664  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:46.144539  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:46.144569  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:46.160047  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:46.160063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:46.224857  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:46.224867  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:46.224878  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:48.792536  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:48.802577  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:48.802642  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:48.826706  396441 cri.go:89] found id: ""
	I1213 10:51:48.826720  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.826727  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:48.826733  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:48.826787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:48.851205  396441 cri.go:89] found id: ""
	I1213 10:51:48.851219  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.851226  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:48.851232  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:48.851286  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:48.875646  396441 cri.go:89] found id: ""
	I1213 10:51:48.875661  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.875669  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:48.875674  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:48.875742  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:48.902019  396441 cri.go:89] found id: ""
	I1213 10:51:48.902033  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.902041  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:48.902046  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:48.902102  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:48.926529  396441 cri.go:89] found id: ""
	I1213 10:51:48.926543  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.926550  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:48.926555  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:48.926610  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:48.952549  396441 cri.go:89] found id: ""
	I1213 10:51:48.952563  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.952570  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:48.952576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:48.952637  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:48.977178  396441 cri.go:89] found id: ""
	I1213 10:51:48.977191  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.977198  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:48.977206  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:48.977218  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:49.044123  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:49.044147  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:49.066217  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:49.066239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:49.145635  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:49.145645  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:49.145655  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:49.212965  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:49.212984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:51.744115  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:51.755896  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:51.755984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:51.790945  396441 cri.go:89] found id: ""
	I1213 10:51:51.790958  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.790965  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:51.790970  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:51.791024  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:51.816688  396441 cri.go:89] found id: ""
	I1213 10:51:51.816702  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.816709  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:51.816715  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:51.816782  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:51.841873  396441 cri.go:89] found id: ""
	I1213 10:51:51.841886  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.841893  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:51.841898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:51.841955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:51.867108  396441 cri.go:89] found id: ""
	I1213 10:51:51.867121  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.867129  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:51.867134  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:51.867187  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:51.892370  396441 cri.go:89] found id: ""
	I1213 10:51:51.892383  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.892390  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:51.892395  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:51.892453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:51.923043  396441 cri.go:89] found id: ""
	I1213 10:51:51.923057  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.923064  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:51.923069  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:51.923159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:51.948869  396441 cri.go:89] found id: ""
	I1213 10:51:51.948882  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.948889  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:51.948897  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:51.948926  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:52.018383  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:52.018405  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:52.018422  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:52.099342  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:52.099363  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:52.136780  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:52.136795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:52.202388  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:52.202408  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:54.716950  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:54.726860  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:54.726918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:54.751377  396441 cri.go:89] found id: ""
	I1213 10:51:54.751389  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.751396  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:54.751401  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:54.751460  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:54.776769  396441 cri.go:89] found id: ""
	I1213 10:51:54.776782  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.776801  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:54.776806  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:54.776871  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:54.806646  396441 cri.go:89] found id: ""
	I1213 10:51:54.806659  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.806666  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:54.806671  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:54.806727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:54.834243  396441 cri.go:89] found id: ""
	I1213 10:51:54.834256  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.834264  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:54.834269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:54.834322  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:54.859938  396441 cri.go:89] found id: ""
	I1213 10:51:54.859958  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.859965  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:54.859970  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:54.860025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:54.886545  396441 cri.go:89] found id: ""
	I1213 10:51:54.886559  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.886565  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:54.886571  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:54.886633  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:54.911784  396441 cri.go:89] found id: ""
	I1213 10:51:54.911798  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.911805  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:54.911812  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:54.911828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:54.973210  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:54.973220  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:54.973230  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:55.051411  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:55.051430  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:55.085480  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:55.085497  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:55.151220  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:55.151241  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.666660  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:57.676624  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:57.676689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:57.702082  396441 cri.go:89] found id: ""
	I1213 10:51:57.702095  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.702103  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:57.702108  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:57.702171  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:57.727577  396441 cri.go:89] found id: ""
	I1213 10:51:57.727591  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.727598  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:57.727603  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:57.727657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:57.752756  396441 cri.go:89] found id: ""
	I1213 10:51:57.752770  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.752777  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:57.752782  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:57.752846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:57.778022  396441 cri.go:89] found id: ""
	I1213 10:51:57.778036  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.778043  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:57.778048  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:57.778108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:57.803300  396441 cri.go:89] found id: ""
	I1213 10:51:57.803314  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.803321  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:57.803326  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:57.803385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:57.828374  396441 cri.go:89] found id: ""
	I1213 10:51:57.828389  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.828396  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:57.828402  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:57.828457  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:57.854910  396441 cri.go:89] found id: ""
	I1213 10:51:57.854925  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.854947  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:57.854955  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:57.854965  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:57.919106  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:57.919126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.933832  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:57.933847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:58.000903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:58.000914  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:58.000925  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:58.077434  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:58.077453  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:00.612878  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:00.623959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:00.624026  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:00.653620  396441 cri.go:89] found id: ""
	I1213 10:52:00.653635  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.653642  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:00.653647  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:00.653705  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:00.679802  396441 cri.go:89] found id: ""
	I1213 10:52:00.679818  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.679825  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:00.679830  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:00.679890  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:00.706677  396441 cri.go:89] found id: ""
	I1213 10:52:00.706691  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.706698  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:00.706703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:00.706759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:00.734612  396441 cri.go:89] found id: ""
	I1213 10:52:00.734627  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.734634  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:00.734640  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:00.734697  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:00.761763  396441 cri.go:89] found id: ""
	I1213 10:52:00.761777  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.761784  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:00.761790  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:00.761846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:00.790057  396441 cri.go:89] found id: ""
	I1213 10:52:00.790071  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.790078  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:00.790083  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:00.790140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:00.816353  396441 cri.go:89] found id: ""
	I1213 10:52:00.816367  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.816374  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:00.816381  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:00.816391  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:00.881315  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:00.881335  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:00.896220  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:00.896239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:00.961380  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:00.961391  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:00.961401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:01.031353  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:01.031373  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.565879  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:03.575985  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:03.576043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:03.605780  396441 cri.go:89] found id: ""
	I1213 10:52:03.605794  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.605801  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:03.605807  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:03.605864  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:03.630990  396441 cri.go:89] found id: ""
	I1213 10:52:03.631006  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.631013  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:03.631018  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:03.631073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:03.658564  396441 cri.go:89] found id: ""
	I1213 10:52:03.658578  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.658585  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:03.658590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:03.658645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:03.689093  396441 cri.go:89] found id: ""
	I1213 10:52:03.689108  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.689116  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:03.689121  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:03.689179  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:03.714786  396441 cri.go:89] found id: ""
	I1213 10:52:03.714800  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.714807  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:03.714812  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:03.714870  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:03.741755  396441 cri.go:89] found id: ""
	I1213 10:52:03.741769  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.741777  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:03.741783  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:03.741841  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:03.771487  396441 cri.go:89] found id: ""
	I1213 10:52:03.771502  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.771509  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:03.771538  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:03.771548  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.800650  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:03.800666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:03.866429  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:03.866448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:03.882243  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:03.882260  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:03.951157  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:03.951167  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:03.951190  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.522609  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:06.532880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:06.532944  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:06.557937  396441 cri.go:89] found id: ""
	I1213 10:52:06.557952  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.557959  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:06.557965  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:06.558020  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:06.588572  396441 cri.go:89] found id: ""
	I1213 10:52:06.588586  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.588595  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:06.588600  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:06.588660  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:06.614455  396441 cri.go:89] found id: ""
	I1213 10:52:06.614468  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.614476  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:06.614481  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:06.614546  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:06.640258  396441 cri.go:89] found id: ""
	I1213 10:52:06.640272  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.640279  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:06.640285  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:06.640341  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:06.666195  396441 cri.go:89] found id: ""
	I1213 10:52:06.666209  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.666216  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:06.666222  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:06.666278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:06.690768  396441 cri.go:89] found id: ""
	I1213 10:52:06.690781  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.690788  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:06.690793  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:06.690846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:06.714814  396441 cri.go:89] found id: ""
	I1213 10:52:06.714828  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.714835  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:06.714842  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:06.714852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:06.779445  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:06.779463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:06.794405  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:06.794419  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:06.863881  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:06.863893  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:06.863903  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.931872  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:06.931893  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
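The repeated "sudo crictl ps -a --quiet --name=..." calls above query the CRI runtime, component by component, for any control-plane container (running or exited); every query here returns an empty ID list, which is why each cycle falls back to gathering kubelet, dmesg, describe-nodes and CRI-O logs. A minimal stand-alone sketch of that query pattern, for illustration only (this is not minikube's actual cri.go code; it assumes crictl and sudo are available on the host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out to crictl the same way the log lines above do
// and returns the IDs of all containers (any state) whose name matches the
// given component, e.g. "kube-apiserver". An empty slice mirrors the
// `found id: ""` / `0 containers` lines in the log.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	ids := []string{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %q containers failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
	}
}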
	I1213 10:52:09.461689  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:09.471808  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:09.471866  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:09.498684  396441 cri.go:89] found id: ""
	I1213 10:52:09.498698  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.498705  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:09.498710  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:09.498770  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:09.525226  396441 cri.go:89] found id: ""
	I1213 10:52:09.525240  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.525248  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:09.525253  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:09.525312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:09.552412  396441 cri.go:89] found id: ""
	I1213 10:52:09.552426  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.552433  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:09.552438  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:09.552496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:09.581636  396441 cri.go:89] found id: ""
	I1213 10:52:09.581650  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.581657  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:09.581662  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:09.581717  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:09.606899  396441 cri.go:89] found id: ""
	I1213 10:52:09.606913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.606926  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:09.606931  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:09.606985  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:09.635899  396441 cri.go:89] found id: ""
	I1213 10:52:09.635913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.635920  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:09.635926  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:09.635990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:09.660294  396441 cri.go:89] found id: ""
	I1213 10:52:09.660308  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.660315  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:09.660322  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:09.660332  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:09.727938  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:09.727956  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:09.742322  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:09.742337  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:09.806667  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:09.806677  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:09.806688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:09.873384  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:09.873405  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.403419  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:12.413610  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:12.413670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:12.439264  396441 cri.go:89] found id: ""
	I1213 10:52:12.439277  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.439285  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:12.439290  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:12.439347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:12.464906  396441 cri.go:89] found id: ""
	I1213 10:52:12.464920  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.464927  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:12.464932  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:12.464988  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:12.498036  396441 cri.go:89] found id: ""
	I1213 10:52:12.498050  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.498057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:12.498062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:12.498124  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:12.527408  396441 cri.go:89] found id: ""
	I1213 10:52:12.527424  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.527432  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:12.527437  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:12.527493  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:12.553426  396441 cri.go:89] found id: ""
	I1213 10:52:12.553440  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.553449  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:12.553456  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:12.553512  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:12.577801  396441 cri.go:89] found id: ""
	I1213 10:52:12.577821  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.577829  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:12.577834  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:12.577892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:12.602596  396441 cri.go:89] found id: ""
	I1213 10:52:12.602610  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.602617  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:12.602625  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:12.602636  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:12.617159  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:12.617175  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:12.679319  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:12.679331  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:12.679344  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:12.750080  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:12.750100  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.781595  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:12.781612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:15.350487  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:15.360659  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:15.360718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:15.387859  396441 cri.go:89] found id: ""
	I1213 10:52:15.387872  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.387879  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:15.387885  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:15.387938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:15.414186  396441 cri.go:89] found id: ""
	I1213 10:52:15.414200  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.414207  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:15.414212  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:15.414279  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:15.441078  396441 cri.go:89] found id: ""
	I1213 10:52:15.441093  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.441099  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:15.441105  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:15.441160  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:15.469023  396441 cri.go:89] found id: ""
	I1213 10:52:15.469038  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.469045  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:15.469051  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:15.469107  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:15.497840  396441 cri.go:89] found id: ""
	I1213 10:52:15.497855  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.497862  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:15.497870  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:15.497929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:15.527216  396441 cri.go:89] found id: ""
	I1213 10:52:15.527240  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.527248  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:15.527253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:15.527318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:15.552512  396441 cri.go:89] found id: ""
	I1213 10:52:15.552526  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.552533  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:15.552541  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:15.552551  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:15.566854  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:15.566872  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:15.630069  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:15.630081  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:15.630091  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:15.696860  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:15.696880  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:15.724271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:15.724287  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.289647  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:18.301895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:18.301952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:18.337658  396441 cri.go:89] found id: ""
	I1213 10:52:18.337672  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.337679  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:18.337684  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:18.337739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:18.362954  396441 cri.go:89] found id: ""
	I1213 10:52:18.362968  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.362975  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:18.362980  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:18.363038  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:18.388674  396441 cri.go:89] found id: ""
	I1213 10:52:18.388687  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.388694  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:18.388699  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:18.388759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:18.420176  396441 cri.go:89] found id: ""
	I1213 10:52:18.420189  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.420196  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:18.420202  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:18.420264  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:18.445491  396441 cri.go:89] found id: ""
	I1213 10:52:18.445505  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.445513  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:18.445518  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:18.445579  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:18.470012  396441 cri.go:89] found id: ""
	I1213 10:52:18.470026  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.470034  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:18.470039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:18.470097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:18.495243  396441 cri.go:89] found id: ""
	I1213 10:52:18.495257  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.495264  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:18.495271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:18.495282  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.563479  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:18.563500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:18.578295  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:18.578311  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:18.646148  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:18.646163  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:18.646174  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:18.718257  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:18.718284  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:21.249994  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:21.259664  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:21.259726  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:21.295330  396441 cri.go:89] found id: ""
	I1213 10:52:21.295344  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.295352  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:21.295359  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:21.295416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:21.321231  396441 cri.go:89] found id: ""
	I1213 10:52:21.321244  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.321252  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:21.321257  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:21.321315  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:21.352593  396441 cri.go:89] found id: ""
	I1213 10:52:21.352607  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.352615  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:21.352620  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:21.352673  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:21.377931  396441 cri.go:89] found id: ""
	I1213 10:52:21.377946  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.377953  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:21.377959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:21.378013  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:21.402837  396441 cri.go:89] found id: ""
	I1213 10:52:21.402851  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.402857  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:21.402863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:21.402917  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:21.431840  396441 cri.go:89] found id: ""
	I1213 10:52:21.431855  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.431862  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:21.431867  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:21.431923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:21.456743  396441 cri.go:89] found id: ""
	I1213 10:52:21.456757  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.456764  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:21.456772  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:21.456783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:21.524923  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:21.524943  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:21.539831  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:21.539847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:21.606862  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:21.606873  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:21.606883  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:21.674639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:21.674658  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.206551  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:24.216405  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:24.216463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:24.242228  396441 cri.go:89] found id: ""
	I1213 10:52:24.242242  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.242257  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:24.242262  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:24.242323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:24.267087  396441 cri.go:89] found id: ""
	I1213 10:52:24.267101  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.267108  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:24.267113  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:24.267165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:24.309002  396441 cri.go:89] found id: ""
	I1213 10:52:24.309015  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.309022  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:24.309027  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:24.309094  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:24.339349  396441 cri.go:89] found id: ""
	I1213 10:52:24.339362  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.339370  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:24.339375  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:24.339432  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:24.368576  396441 cri.go:89] found id: ""
	I1213 10:52:24.368590  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.368597  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:24.368602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:24.368659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:24.394642  396441 cri.go:89] found id: ""
	I1213 10:52:24.394656  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.394663  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:24.394669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:24.394733  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:24.421211  396441 cri.go:89] found id: ""
	I1213 10:52:24.421225  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.421232  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:24.421240  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:24.421250  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:24.487558  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:24.487569  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:24.487579  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:24.558449  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:24.558469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.588318  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:24.588333  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:24.654250  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:24.654270  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
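The roughly three-second spacing between the "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above reflects a wait loop that keeps polling for an apiserver process and re-gathers diagnostics while none is found; the connection-refused errors on localhost:8441 are the expected symptom while that process is absent. A rough sketch of such a polling loop, offered as an assumption about the pattern rather than the actual minikube implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a process matching the same pattern the
// log polls for exists; pgrep exits non-zero when nothing matches, so a nil
// error from Run means at least one match was found.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // illustrative timeout, not the test's real budget
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// Roughly matches the ~3s spacing between pgrep attempts in the log;
		// the real code also gathers kubelet/dmesg/CRI-O logs between attempts.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}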
	I1213 10:52:27.169201  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:27.180049  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:27.180109  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:27.206061  396441 cri.go:89] found id: ""
	I1213 10:52:27.206075  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.206082  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:27.206096  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:27.206154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:27.233191  396441 cri.go:89] found id: ""
	I1213 10:52:27.233205  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.233214  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:27.233219  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:27.233281  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:27.260006  396441 cri.go:89] found id: ""
	I1213 10:52:27.260026  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.260034  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:27.260039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:27.260097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:27.297935  396441 cri.go:89] found id: ""
	I1213 10:52:27.297949  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.297956  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:27.297962  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:27.298016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:27.327550  396441 cri.go:89] found id: ""
	I1213 10:52:27.327564  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.327571  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:27.327576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:27.327632  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:27.357264  396441 cri.go:89] found id: ""
	I1213 10:52:27.357277  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.357285  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:27.357290  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:27.357345  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:27.386557  396441 cri.go:89] found id: ""
	I1213 10:52:27.386571  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.386579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:27.386587  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:27.386600  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:27.451879  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:27.451900  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:27.466743  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:27.466762  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:27.534974  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:27.534984  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:27.534996  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:27.603674  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:27.603693  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:30.134007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:30.145384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:30.145454  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:30.177035  396441 cri.go:89] found id: ""
	I1213 10:52:30.177050  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.177058  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:30.177063  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:30.177121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:30.203582  396441 cri.go:89] found id: ""
	I1213 10:52:30.203597  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.203604  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:30.203609  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:30.203689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:30.230074  396441 cri.go:89] found id: ""
	I1213 10:52:30.230088  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.230106  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:30.230112  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:30.230183  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:30.255406  396441 cri.go:89] found id: ""
	I1213 10:52:30.255431  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.255439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:30.255445  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:30.255527  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:30.302847  396441 cri.go:89] found id: ""
	I1213 10:52:30.302861  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.302869  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:30.302876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:30.302931  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:30.345708  396441 cri.go:89] found id: ""
	I1213 10:52:30.345722  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.345730  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:30.345735  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:30.345794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:30.373285  396441 cri.go:89] found id: ""
	I1213 10:52:30.373298  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.373305  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:30.373313  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:30.373323  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:30.438965  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:30.438984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:30.453939  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:30.453957  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:30.519205  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:30.519233  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:30.519245  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:30.587307  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:30.587327  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:33.117585  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:33.128213  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:33.128278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:33.159433  396441 cri.go:89] found id: ""
	I1213 10:52:33.159447  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.159455  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:33.159462  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:33.159561  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:33.188876  396441 cri.go:89] found id: ""
	I1213 10:52:33.188890  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.188898  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:33.188904  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:33.188959  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:33.213013  396441 cri.go:89] found id: ""
	I1213 10:52:33.213026  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.213033  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:33.213038  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:33.213098  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:33.237950  396441 cri.go:89] found id: ""
	I1213 10:52:33.237964  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.237971  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:33.237976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:33.238030  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:33.262873  396441 cri.go:89] found id: ""
	I1213 10:52:33.262887  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.262894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:33.262899  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:33.262955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:33.289230  396441 cri.go:89] found id: ""
	I1213 10:52:33.289243  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.289250  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:33.289256  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:33.289312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:33.322162  396441 cri.go:89] found id: ""
	I1213 10:52:33.322175  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.322182  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:33.322196  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:33.322206  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:33.350122  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:33.350138  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:33.415463  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:33.415483  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:33.430091  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:33.430108  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:33.492694  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:33.492704  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:33.492713  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.059928  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:36.071377  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:36.071452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:36.097664  396441 cri.go:89] found id: ""
	I1213 10:52:36.097678  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.097685  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:36.097691  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:36.097753  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:36.123266  396441 cri.go:89] found id: ""
	I1213 10:52:36.123280  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.123287  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:36.123292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:36.123348  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:36.149443  396441 cri.go:89] found id: ""
	I1213 10:52:36.149456  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.149464  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:36.149469  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:36.149525  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:36.174882  396441 cri.go:89] found id: ""
	I1213 10:52:36.174896  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.174903  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:36.174909  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:36.174965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:36.204325  396441 cri.go:89] found id: ""
	I1213 10:52:36.204348  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.204356  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:36.204362  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:36.204427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:36.234444  396441 cri.go:89] found id: ""
	I1213 10:52:36.234457  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.234474  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:36.234479  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:36.234550  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:36.259366  396441 cri.go:89] found id: ""
	I1213 10:52:36.259390  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.259397  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:36.259406  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:36.259416  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:36.332816  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:36.332834  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:36.348343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:36.348362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:36.412337  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:36.412348  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:36.412358  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.480447  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:36.480469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:39.011418  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:39.022791  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:39.022856  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:39.048926  396441 cri.go:89] found id: ""
	I1213 10:52:39.048939  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.048946  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:39.048951  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:39.049008  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:39.074187  396441 cri.go:89] found id: ""
	I1213 10:52:39.074201  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.074209  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:39.074214  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:39.074274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:39.099262  396441 cri.go:89] found id: ""
	I1213 10:52:39.099275  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.099282  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:39.099288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:39.099351  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:39.123854  396441 cri.go:89] found id: ""
	I1213 10:52:39.123868  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.123876  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:39.123881  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:39.123935  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:39.148849  396441 cri.go:89] found id: ""
	I1213 10:52:39.148864  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.148871  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:39.148876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:39.148937  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:39.178852  396441 cri.go:89] found id: ""
	I1213 10:52:39.178866  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.178873  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:39.178879  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:39.178936  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:39.203878  396441 cri.go:89] found id: ""
	I1213 10:52:39.203892  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.203899  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:39.203907  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:39.203921  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:39.270764  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:39.270783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:39.286957  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:39.286976  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:39.359682  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:39.359693  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:39.359707  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:39.429853  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:39.429874  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:41.960684  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:41.971667  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:41.971727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:42.002821  396441 cri.go:89] found id: ""
	I1213 10:52:42.002836  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.002844  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:42.002849  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:42.002914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:42.045054  396441 cri.go:89] found id: ""
	I1213 10:52:42.045068  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.045075  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:42.045080  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:42.045141  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:42.077836  396441 cri.go:89] found id: ""
	I1213 10:52:42.077852  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.077865  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:42.077871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:42.077947  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:42.115684  396441 cri.go:89] found id: ""
	I1213 10:52:42.115706  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.115714  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:42.115729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:42.115828  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:42.147177  396441 cri.go:89] found id: ""
	I1213 10:52:42.147194  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.147202  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:42.147208  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:42.147280  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:42.180144  396441 cri.go:89] found id: ""
	I1213 10:52:42.180165  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.180174  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:42.180181  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:42.180255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:42.220442  396441 cri.go:89] found id: ""
	I1213 10:52:42.220457  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.220466  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:42.220475  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:42.220486  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:42.297964  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:42.297984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:42.315552  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:42.315571  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:42.388538  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:42.388548  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:42.388558  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:42.457255  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:42.457276  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:44.987527  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:44.999384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:44.999443  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:45.050333  396441 cri.go:89] found id: ""
	I1213 10:52:45.050351  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.050366  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:45.050372  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:45.050449  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:45.102093  396441 cri.go:89] found id: ""
	I1213 10:52:45.102110  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.102126  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:45.102132  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:45.102218  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:45.141159  396441 cri.go:89] found id: ""
	I1213 10:52:45.141176  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.141184  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:45.141190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:45.141265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:45.181959  396441 cri.go:89] found id: ""
	I1213 10:52:45.181976  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.181994  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:45.182000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:45.182074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:45.231005  396441 cri.go:89] found id: ""
	I1213 10:52:45.231020  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.231027  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:45.231033  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:45.231103  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:45.269802  396441 cri.go:89] found id: ""
	I1213 10:52:45.269816  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.269824  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:45.269829  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:45.269906  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:45.302267  396441 cri.go:89] found id: ""
	I1213 10:52:45.302281  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.302289  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:45.302297  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:45.302307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:45.375709  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:45.375731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:45.390641  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:45.390662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:45.456742  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:45.456753  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:45.456763  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:45.525649  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:45.525668  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.060311  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:48.071648  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:48.071715  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:48.102851  396441 cri.go:89] found id: ""
	I1213 10:52:48.102865  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.102872  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:48.102878  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:48.102948  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:48.128470  396441 cri.go:89] found id: ""
	I1213 10:52:48.128485  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.128492  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:48.128499  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:48.128556  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:48.155177  396441 cri.go:89] found id: ""
	I1213 10:52:48.155197  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.155205  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:48.155210  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:48.155265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:48.182358  396441 cri.go:89] found id: ""
	I1213 10:52:48.182373  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.182380  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:48.182385  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:48.182447  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:48.208531  396441 cri.go:89] found id: ""
	I1213 10:52:48.208550  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.208557  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:48.208562  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:48.208616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:48.234008  396441 cri.go:89] found id: ""
	I1213 10:52:48.234023  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.234031  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:48.234036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:48.234093  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:48.261447  396441 cri.go:89] found id: ""
	I1213 10:52:48.261461  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.261469  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:48.261480  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:48.261492  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:48.278413  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:48.278429  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:48.358811  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:48.358821  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:48.358832  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:48.433414  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:48.433443  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.466431  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:48.466452  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.033966  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:51.044258  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:51.044317  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:51.072809  396441 cri.go:89] found id: ""
	I1213 10:52:51.072823  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.072830  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:51.072836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:51.072895  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:51.102333  396441 cri.go:89] found id: ""
	I1213 10:52:51.102346  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.102353  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:51.102358  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:51.102415  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:51.128414  396441 cri.go:89] found id: ""
	I1213 10:52:51.128427  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.128434  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:51.128439  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:51.128494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:51.154902  396441 cri.go:89] found id: ""
	I1213 10:52:51.154916  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.154923  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:51.154928  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:51.154983  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:51.182112  396441 cri.go:89] found id: ""
	I1213 10:52:51.182126  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.182133  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:51.182143  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:51.182197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:51.207919  396441 cri.go:89] found id: ""
	I1213 10:52:51.207933  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.207941  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:51.207946  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:51.208001  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:51.234193  396441 cri.go:89] found id: ""
	I1213 10:52:51.234207  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.234214  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:51.234222  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:51.234238  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.303042  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:51.303060  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:51.321366  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:51.321383  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:51.393364  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:51.393375  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:51.393385  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:51.461747  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:51.461768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:53.992488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:54.002605  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:54.002667  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:54.037835  396441 cri.go:89] found id: ""
	I1213 10:52:54.037849  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.037857  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:54.037862  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:54.037934  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:54.066982  396441 cri.go:89] found id: ""
	I1213 10:52:54.066998  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.067009  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:54.067015  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:54.067074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:54.093461  396441 cri.go:89] found id: ""
	I1213 10:52:54.093475  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.093482  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:54.093487  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:54.093544  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:54.123249  396441 cri.go:89] found id: ""
	I1213 10:52:54.123263  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.123271  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:54.123276  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:54.123333  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:54.150103  396441 cri.go:89] found id: ""
	I1213 10:52:54.150116  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.150124  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:54.150130  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:54.150186  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:54.176271  396441 cri.go:89] found id: ""
	I1213 10:52:54.176285  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.176291  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:54.176296  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:54.176355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:54.204655  396441 cri.go:89] found id: ""
	I1213 10:52:54.204669  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.204676  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:54.204684  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:54.204695  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:54.270252  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:54.270262  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:54.270272  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:54.345996  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:54.346016  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:54.383713  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:54.383730  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:54.450349  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:54.450368  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:56.966888  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:56.976557  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:56.976616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:57.007803  396441 cri.go:89] found id: ""
	I1213 10:52:57.007828  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.007836  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:57.007842  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:57.007910  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:57.035051  396441 cri.go:89] found id: ""
	I1213 10:52:57.035065  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.035073  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:57.035078  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:57.035137  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:57.060632  396441 cri.go:89] found id: ""
	I1213 10:52:57.060645  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.060652  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:57.060657  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:57.060716  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:57.090660  396441 cri.go:89] found id: ""
	I1213 10:52:57.090674  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.090681  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:57.090686  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:57.090741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:57.115624  396441 cri.go:89] found id: ""
	I1213 10:52:57.115638  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.115645  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:57.115650  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:57.115718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:57.146066  396441 cri.go:89] found id: ""
	I1213 10:52:57.146080  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.146087  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:57.146093  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:57.146147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:57.174574  396441 cri.go:89] found id: ""
	I1213 10:52:57.174589  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.174596  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:57.174604  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:57.174614  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:57.202471  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:57.202487  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:57.267828  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:57.267852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:57.284906  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:57.284922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:57.357618  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:57.357629  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:57.357641  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:59.928373  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:59.939417  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:59.939503  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:59.968871  396441 cri.go:89] found id: ""
	I1213 10:52:59.968885  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.968892  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:59.968897  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:59.968952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:59.994167  396441 cri.go:89] found id: ""
	I1213 10:52:59.994181  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.994188  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:59.994192  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:59.994244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:00.051356  396441 cri.go:89] found id: ""
	I1213 10:53:00.051372  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.051380  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:00.051386  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:00.051453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:00.143874  396441 cri.go:89] found id: ""
	I1213 10:53:00.143902  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.143910  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:00.143915  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:00.143990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:00.245636  396441 cri.go:89] found id: ""
	I1213 10:53:00.245660  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.245669  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:00.245676  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:00.245762  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:00.304351  396441 cri.go:89] found id: ""
	I1213 10:53:00.304370  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.304378  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:00.304384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:00.304463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:00.342460  396441 cri.go:89] found id: ""
	I1213 10:53:00.342483  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.342492  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:00.342503  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:00.342552  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:00.422913  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:00.422924  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:00.422935  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:00.494010  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:00.494031  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:00.523384  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:00.523401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:00.590600  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:00.590620  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.105926  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:03.116415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:03.116476  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:03.148167  396441 cri.go:89] found id: ""
	I1213 10:53:03.148181  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.148189  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:03.148195  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:03.148255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:03.173610  396441 cri.go:89] found id: ""
	I1213 10:53:03.173624  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.173633  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:03.173638  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:03.173698  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:03.198406  396441 cri.go:89] found id: ""
	I1213 10:53:03.198420  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.198427  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:03.198432  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:03.198494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:03.228196  396441 cri.go:89] found id: ""
	I1213 10:53:03.228210  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.228218  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:03.228223  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:03.228284  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:03.258506  396441 cri.go:89] found id: ""
	I1213 10:53:03.258539  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.258547  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:03.258552  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:03.258617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:03.293938  396441 cri.go:89] found id: ""
	I1213 10:53:03.293951  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.293968  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:03.293973  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:03.294029  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:03.322417  396441 cri.go:89] found id: ""
	I1213 10:53:03.322441  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.322448  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:03.322456  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:03.322467  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.338484  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:03.338500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:03.404903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:03.404913  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:03.404930  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:03.476102  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:03.476122  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:03.508468  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:03.508484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.073576  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:06.084007  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:06.084073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:06.110819  396441 cri.go:89] found id: ""
	I1213 10:53:06.110834  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.110841  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:06.110847  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:06.110915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:06.136257  396441 cri.go:89] found id: ""
	I1213 10:53:06.136271  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.136278  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:06.136286  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:06.136344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:06.162392  396441 cri.go:89] found id: ""
	I1213 10:53:06.162406  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.162413  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:06.162419  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:06.162479  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:06.191163  396441 cri.go:89] found id: ""
	I1213 10:53:06.191178  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.191185  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:06.191190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:06.191244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:06.217747  396441 cri.go:89] found id: ""
	I1213 10:53:06.217761  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.217769  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:06.217774  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:06.217829  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:06.242838  396441 cri.go:89] found id: ""
	I1213 10:53:06.242851  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.242858  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:06.242864  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:06.242918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:06.267811  396441 cri.go:89] found id: ""
	I1213 10:53:06.267831  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.267838  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:06.267846  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:06.267857  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:06.351297  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:06.351310  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:06.351321  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:06.418677  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:06.418696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:06.456760  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:06.456778  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.525341  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:06.525362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:09.044095  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:09.054348  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:09.054410  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:09.081344  396441 cri.go:89] found id: ""
	I1213 10:53:09.081358  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.081365  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:09.081376  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:09.081434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:09.107998  396441 cri.go:89] found id: ""
	I1213 10:53:09.108012  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.108019  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:09.108024  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:09.108084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:09.133582  396441 cri.go:89] found id: ""
	I1213 10:53:09.133596  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.133603  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:09.133608  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:09.133666  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:09.158646  396441 cri.go:89] found id: ""
	I1213 10:53:09.158669  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.158677  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:09.158682  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:09.158746  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:09.184013  396441 cri.go:89] found id: ""
	I1213 10:53:09.184028  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.184035  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:09.184040  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:09.184097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:09.210338  396441 cri.go:89] found id: ""
	I1213 10:53:09.210352  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.210370  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:09.210376  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:09.210434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:09.236029  396441 cri.go:89] found id: ""
	I1213 10:53:09.236045  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.236052  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:09.236059  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:09.236069  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:09.310970  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:09.310981  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:09.310992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:09.380678  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:09.380700  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:09.413354  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:09.413371  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:09.481585  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:09.481603  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:11.996259  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:12.009133  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:12.009217  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:12.044141  396441 cri.go:89] found id: ""
	I1213 10:53:12.044157  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.044164  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:12.044170  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:12.044230  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:12.070547  396441 cri.go:89] found id: ""
	I1213 10:53:12.070579  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.070587  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:12.070598  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:12.070664  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:12.095879  396441 cri.go:89] found id: ""
	I1213 10:53:12.095893  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.095900  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:12.095905  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:12.095965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:12.125533  396441 cri.go:89] found id: ""
	I1213 10:53:12.125547  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.125554  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:12.125559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:12.125618  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:12.151281  396441 cri.go:89] found id: ""
	I1213 10:53:12.151303  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.151311  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:12.151317  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:12.151385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:12.176331  396441 cri.go:89] found id: ""
	I1213 10:53:12.176353  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.176361  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:12.176366  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:12.176433  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:12.202465  396441 cri.go:89] found id: ""
	I1213 10:53:12.202486  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.202493  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:12.202500  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:12.202523  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:12.268244  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:12.268263  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:12.285364  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:12.285379  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:12.357173  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:12.357192  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:12.357204  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:12.424809  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:12.424830  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:14.955688  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:14.967057  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:14.967115  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:14.993136  396441 cri.go:89] found id: ""
	I1213 10:53:14.993150  396441 logs.go:282] 0 containers: []
	W1213 10:53:14.993157  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:14.993163  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:14.993220  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:15.028691  396441 cri.go:89] found id: ""
	I1213 10:53:15.028707  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.028722  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:15.028728  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:15.028794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:15.056676  396441 cri.go:89] found id: ""
	I1213 10:53:15.056705  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.056732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:15.056739  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:15.056800  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:15.085199  396441 cri.go:89] found id: ""
	I1213 10:53:15.085213  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.085221  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:15.085226  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:15.085288  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:15.113074  396441 cri.go:89] found id: ""
	I1213 10:53:15.113088  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.113095  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:15.113101  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:15.113159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:15.142568  396441 cri.go:89] found id: ""
	I1213 10:53:15.142581  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.142589  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:15.142595  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:15.142655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:15.167430  396441 cri.go:89] found id: ""
	I1213 10:53:15.167443  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.167450  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:15.167458  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:15.167471  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:15.233925  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:15.233946  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:15.248849  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:15.248866  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:15.332377  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:15.332397  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:15.332409  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:15.401263  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:15.401283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:17.930625  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:17.940643  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:17.940703  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:17.965657  396441 cri.go:89] found id: ""
	I1213 10:53:17.965671  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.965678  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:17.965683  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:17.965740  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:17.990612  396441 cri.go:89] found id: ""
	I1213 10:53:17.990635  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.990642  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:17.990648  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:17.990723  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:18.025034  396441 cri.go:89] found id: ""
	I1213 10:53:18.025049  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.025057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:18.025063  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:18.025123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:18.052589  396441 cri.go:89] found id: ""
	I1213 10:53:18.052611  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.052619  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:18.052625  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:18.052683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:18.079906  396441 cri.go:89] found id: ""
	I1213 10:53:18.079921  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.079929  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:18.079935  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:18.079997  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:18.107302  396441 cri.go:89] found id: ""
	I1213 10:53:18.107327  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.107335  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:18.107340  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:18.107409  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:18.135776  396441 cri.go:89] found id: ""
	I1213 10:53:18.135790  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.135797  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:18.135805  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:18.135815  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:18.153173  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:18.153189  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:18.221544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
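The block above is one iteration of minikube's readiness probe: it looks for a kube-apiserver process, asks crictl for each control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The describe-nodes step fails each time because nothing is listening on localhost:8441. The same iteration repeats below roughly every three seconds for the rest of this window. A minimal shell sketch of the probe, built only from the commands that appear verbatim in this log (the component names and the journalctl/dmesg invocations are taken from the log; the loop itself is illustrative, not minikube's actual implementation), looks like this:

    # Probe for control-plane containers the way the log shows minikube doing it.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # empty output => "0 containers"
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      fi
    done
    # Fallback log gathering, same commands as recorded above.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a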
	I1213 10:53:18.221554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:18.221565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:18.296047  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:18.296072  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:18.330043  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:18.330063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:20.909395  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:20.919737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:20.919799  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:20.946000  396441 cri.go:89] found id: ""
	I1213 10:53:20.946014  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.946022  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:20.946027  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:20.946084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:20.975734  396441 cri.go:89] found id: ""
	I1213 10:53:20.975749  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.975756  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:20.975761  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:20.975815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:21.000961  396441 cri.go:89] found id: ""
	I1213 10:53:21.000976  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.000983  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:21.000988  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:21.001043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:21.027875  396441 cri.go:89] found id: ""
	I1213 10:53:21.027889  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.027896  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:21.027902  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:21.027963  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:21.053113  396441 cri.go:89] found id: ""
	I1213 10:53:21.053127  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.053134  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:21.053140  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:21.053198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:21.078404  396441 cri.go:89] found id: ""
	I1213 10:53:21.078418  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.078425  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:21.078430  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:21.078484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:21.103558  396441 cri.go:89] found id: ""
	I1213 10:53:21.103571  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.103579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:21.103592  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:21.103604  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:21.172527  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:21.172545  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:21.187768  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:21.187785  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:21.256696  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:21.256707  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:21.256717  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:21.327132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:21.327151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:23.867087  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:23.877218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:23.877278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:23.901809  396441 cri.go:89] found id: ""
	I1213 10:53:23.901824  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.901831  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:23.901836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:23.901892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:23.928024  396441 cri.go:89] found id: ""
	I1213 10:53:23.928038  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.928044  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:23.928051  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:23.928104  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:23.953141  396441 cri.go:89] found id: ""
	I1213 10:53:23.953154  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.953161  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:23.953166  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:23.953223  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:23.981670  396441 cri.go:89] found id: ""
	I1213 10:53:23.981684  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.981691  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:23.981696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:23.981754  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:24.014889  396441 cri.go:89] found id: ""
	I1213 10:53:24.014904  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.014912  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:24.014917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:24.014982  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:24.041025  396441 cri.go:89] found id: ""
	I1213 10:53:24.041040  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.041047  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:24.041052  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:24.041110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:24.068555  396441 cri.go:89] found id: ""
	I1213 10:53:24.068570  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.068578  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:24.068586  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:24.068596  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:24.082803  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:24.082819  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:24.145822  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:24.145832  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:24.145843  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:24.213727  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:24.213747  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:24.241111  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:24.241126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:26.808221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:26.818590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:26.818659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:26.848553  396441 cri.go:89] found id: ""
	I1213 10:53:26.848568  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.848575  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:26.848580  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:26.848636  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:26.878256  396441 cri.go:89] found id: ""
	I1213 10:53:26.878274  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.878281  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:26.878288  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:26.878343  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:26.905040  396441 cri.go:89] found id: ""
	I1213 10:53:26.905054  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.905061  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:26.905067  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:26.905140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:26.933587  396441 cri.go:89] found id: ""
	I1213 10:53:26.933601  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.933608  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:26.933613  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:26.933669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:26.958154  396441 cri.go:89] found id: ""
	I1213 10:53:26.958167  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.958175  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:26.958180  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:26.958240  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:26.986142  396441 cri.go:89] found id: ""
	I1213 10:53:26.986156  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.986164  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:26.986169  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:26.986222  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:27.013602  396441 cri.go:89] found id: ""
	I1213 10:53:27.013617  396441 logs.go:282] 0 containers: []
	W1213 10:53:27.013625  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:27.013633  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:27.013643  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:27.080830  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:27.080850  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:27.109824  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:27.109839  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:27.175975  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:27.176002  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:27.190437  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:27.190456  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:27.254921  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:29.755755  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:29.767564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:29.767645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:29.797908  396441 cri.go:89] found id: ""
	I1213 10:53:29.797922  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.797929  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:29.797935  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:29.797994  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:29.824494  396441 cri.go:89] found id: ""
	I1213 10:53:29.824508  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.824516  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:29.824521  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:29.824577  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:29.853869  396441 cri.go:89] found id: ""
	I1213 10:53:29.853883  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.853890  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:29.853895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:29.853951  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:29.883491  396441 cri.go:89] found id: ""
	I1213 10:53:29.883504  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.883526  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:29.883531  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:29.883590  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:29.908921  396441 cri.go:89] found id: ""
	I1213 10:53:29.908935  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.908943  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:29.908948  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:29.909004  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:29.938464  396441 cri.go:89] found id: ""
	I1213 10:53:29.938478  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.938485  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:29.938490  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:29.938568  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:29.964642  396441 cri.go:89] found id: ""
	I1213 10:53:29.964658  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.964665  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:29.964672  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:29.964682  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:30.032663  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:30.032688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:30.050167  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:30.050188  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:30.119376  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:30.119387  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:30.119398  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:30.188285  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:30.188307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:32.723464  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:32.734250  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:32.734319  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:32.760154  396441 cri.go:89] found id: ""
	I1213 10:53:32.760168  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.760175  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:32.760180  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:32.760237  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:32.788893  396441 cri.go:89] found id: ""
	I1213 10:53:32.788906  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.788913  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:32.788918  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:32.788973  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:32.815801  396441 cri.go:89] found id: ""
	I1213 10:53:32.815815  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.815822  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:32.815827  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:32.815884  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:32.840740  396441 cri.go:89] found id: ""
	I1213 10:53:32.840754  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.840761  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:32.840766  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:32.840820  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:32.865881  396441 cri.go:89] found id: ""
	I1213 10:53:32.865895  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.865902  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:32.865907  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:32.865962  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:32.891687  396441 cri.go:89] found id: ""
	I1213 10:53:32.891702  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.891709  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:32.891714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:32.891768  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:32.918219  396441 cri.go:89] found id: ""
	I1213 10:53:32.918233  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.918240  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:32.918248  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:32.918271  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:32.982730  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:32.982749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:32.982759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:33.055443  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:33.055464  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:33.092574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:33.092592  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:33.159246  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:33.159268  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.674110  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:35.683841  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:35.683897  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:35.708708  396441 cri.go:89] found id: ""
	I1213 10:53:35.708722  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.708729  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:35.708735  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:35.708792  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:35.733638  396441 cri.go:89] found id: ""
	I1213 10:53:35.733652  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.733659  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:35.733665  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:35.733725  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:35.759232  396441 cri.go:89] found id: ""
	I1213 10:53:35.759246  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.759254  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:35.759259  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:35.759318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:35.787542  396441 cri.go:89] found id: ""
	I1213 10:53:35.787557  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.787564  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:35.787569  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:35.787625  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:35.811703  396441 cri.go:89] found id: ""
	I1213 10:53:35.811716  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.811724  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:35.811729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:35.811786  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:35.837035  396441 cri.go:89] found id: ""
	I1213 10:53:35.837049  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.837057  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:35.837062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:35.837121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:35.863392  396441 cri.go:89] found id: ""
	I1213 10:53:35.863406  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.863414  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:35.863421  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:35.863431  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:35.928750  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:35.928771  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.943680  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:35.943696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:36.014992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:36.015006  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:36.015018  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:36.088705  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:36.088726  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:38.618865  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:38.628567  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:38.628627  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:38.657828  396441 cri.go:89] found id: ""
	I1213 10:53:38.657842  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.657853  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:38.657859  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:38.657916  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:38.686067  396441 cri.go:89] found id: ""
	I1213 10:53:38.686081  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.686088  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:38.686093  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:38.686148  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:38.723682  396441 cri.go:89] found id: ""
	I1213 10:53:38.723696  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.723703  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:38.723709  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:38.723764  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:38.749537  396441 cri.go:89] found id: ""
	I1213 10:53:38.749552  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.749559  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:38.749564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:38.749617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:38.774109  396441 cri.go:89] found id: ""
	I1213 10:53:38.774129  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.774136  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:38.774141  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:38.774198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:38.799225  396441 cri.go:89] found id: ""
	I1213 10:53:38.799239  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.799263  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:38.799269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:38.799323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:38.828154  396441 cri.go:89] found id: ""
	I1213 10:53:38.828168  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.828176  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:38.828183  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:38.828192  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:38.892547  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:38.892565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:38.907245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:38.907267  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:38.971825  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:38.971835  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:38.971847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:39.041005  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:39.041026  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.575691  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:41.585703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:41.585767  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:41.611468  396441 cri.go:89] found id: ""
	I1213 10:53:41.611482  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.611490  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:41.611495  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:41.611582  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:41.637775  396441 cri.go:89] found id: ""
	I1213 10:53:41.637790  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.637797  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:41.637802  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:41.637865  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:41.666669  396441 cri.go:89] found id: ""
	I1213 10:53:41.666683  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.666691  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:41.666696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:41.666750  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:41.691305  396441 cri.go:89] found id: ""
	I1213 10:53:41.691328  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.691336  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:41.691341  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:41.691403  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:41.716485  396441 cri.go:89] found id: ""
	I1213 10:53:41.716506  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.716514  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:41.716519  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:41.716576  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:41.745432  396441 cri.go:89] found id: ""
	I1213 10:53:41.745446  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.745453  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:41.745458  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:41.745515  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:41.770118  396441 cri.go:89] found id: ""
	I1213 10:53:41.770131  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.770138  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:41.770156  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:41.770165  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.799454  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:41.799470  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:41.863838  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:41.863858  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:41.878805  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:41.878821  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:41.944990  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:41.945000  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:41.945011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.513654  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:44.523863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:44.523923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:44.556878  396441 cri.go:89] found id: ""
	I1213 10:53:44.556891  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.556912  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:44.556917  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:44.556984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:44.592098  396441 cri.go:89] found id: ""
	I1213 10:53:44.592111  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.592128  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:44.592133  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:44.592200  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:44.620862  396441 cri.go:89] found id: ""
	I1213 10:53:44.620875  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.620883  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:44.620898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:44.620965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:44.652601  396441 cri.go:89] found id: ""
	I1213 10:53:44.652615  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.652622  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:44.652627  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:44.652683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:44.678239  396441 cri.go:89] found id: ""
	I1213 10:53:44.678253  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.678269  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:44.678275  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:44.678340  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:44.703917  396441 cri.go:89] found id: ""
	I1213 10:53:44.703930  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.703938  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:44.703943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:44.704002  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:44.730484  396441 cri.go:89] found id: ""
	I1213 10:53:44.730497  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.730505  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:44.730523  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:44.730538  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:44.744828  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:44.744844  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:44.809441  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:44.809451  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:44.809463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.877771  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:44.877793  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:44.911088  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:44.911103  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
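The round of checks above is what minikube falls back to while the apiserver stays unreachable: it lists every expected control-plane container with crictl (all come back empty) and then collects kubelet, dmesg, and CRI-O logs. A minimal sketch of re-running the same checks by hand from the host, assuming the profile name functional-407525 that appears in the CRI-O section further down:

	# Open a shell on the node (profile name is an assumption taken from the CRI-O log below)
	minikube ssh -p functional-407525
	# Same container listing minikube uses; an empty result means the static pod never started
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Same log collection as in the trace above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
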
	I1213 10:53:47.481207  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:47.491256  396441 kubeadm.go:602] duration metric: took 4m3.474830683s to restartPrimaryControlPlane
	W1213 10:53:47.491316  396441 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:53:47.491392  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:53:47.914152  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:53:47.926543  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:53:47.934327  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:53:47.934378  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:53:47.941688  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:53:47.941697  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:53:47.941743  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:53:47.949173  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:53:47.949232  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:53:47.956350  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:53:47.963878  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:53:47.963941  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:53:47.971122  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.978729  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:53:47.978780  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.985856  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:53:47.993466  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:53:47.993519  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:53:48.001100  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:53:48.045742  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:53:48.045801  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:53:48.119066  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:53:48.119144  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:53:48.119191  396441 kubeadm.go:319] OS: Linux
	I1213 10:53:48.119235  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:53:48.119293  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:53:48.119348  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:53:48.119396  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:53:48.119453  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:53:48.119544  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:53:48.119589  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:53:48.119648  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:53:48.119703  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:53:48.191760  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:53:48.191864  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:53:48.191953  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:53:48.199827  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:53:48.203364  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:53:48.203457  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:53:48.203575  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:53:48.203646  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:53:48.203710  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:53:48.203925  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:53:48.203983  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:53:48.204042  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:53:48.204098  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:53:48.204167  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:53:48.204241  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:53:48.204278  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:53:48.204329  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:53:48.358581  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:53:48.732777  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:53:49.132208  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:53:49.321084  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:53:49.412268  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:53:49.412908  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:53:49.417021  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:53:49.420254  396441 out.go:252]   - Booting up control plane ...
	I1213 10:53:49.420359  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:53:49.420477  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:53:49.421364  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:53:49.437192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:53:49.437314  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:53:49.445560  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:53:49.445850  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:53:49.446065  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:53:49.579988  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:53:49.580095  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:57:49.575955  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000564023s
	I1213 10:57:49.575972  396441 kubeadm.go:319] 
	I1213 10:57:49.576025  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:57:49.576055  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:57:49.576153  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:57:49.576156  396441 kubeadm.go:319] 
	I1213 10:57:49.576253  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:57:49.576282  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:57:49.576311  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:57:49.576314  396441 kubeadm.go:319] 
	I1213 10:57:49.584496  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:57:49.584979  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:57:49.585109  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:57:49.585360  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:57:49.585367  396441 kubeadm.go:319] 
	I1213 10:57:49.585449  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:57:49.585544  396441 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000564023s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 10:57:49.585636  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:57:50.015805  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:50.030733  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:57:50.030794  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:57:50.040503  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:57:50.040514  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:57:50.040573  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:57:50.049098  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:57:50.049158  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:57:50.057150  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:57:50.066557  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:57:50.066659  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:57:50.074920  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.083448  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:57:50.083507  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.092213  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:57:50.100606  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:57:50.100667  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:57:50.108705  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:57:50.150598  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:57:50.150922  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:57:50.222346  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:57:50.222407  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:57:50.222441  396441 kubeadm.go:319] OS: Linux
	I1213 10:57:50.222482  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:57:50.222526  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:57:50.222570  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:57:50.222621  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:57:50.222666  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:57:50.222718  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:57:50.222760  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:57:50.222804  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:57:50.222847  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:57:50.290176  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:57:50.290279  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:57:50.290370  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:57:50.297738  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:57:50.303127  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:57:50.303239  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:57:50.303307  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:57:50.303384  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:57:50.303444  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:57:50.303589  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:57:50.303642  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:57:50.303705  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:57:50.303769  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:57:50.303843  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:57:50.303915  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:57:50.303952  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:57:50.304007  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:57:50.552022  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:57:50.900706  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:57:50.944600  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:57:51.426451  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:57:51.746824  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:57:51.747542  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:57:51.750376  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:57:51.753437  396441 out.go:252]   - Booting up control plane ...
	I1213 10:57:51.753548  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:57:51.753629  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:57:51.754233  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:57:51.768926  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:57:51.769192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:57:51.780537  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:57:51.780629  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:57:51.780668  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:57:51.907080  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:57:51.907187  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:01:51.907939  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001143765s
	I1213 11:01:51.907957  396441 kubeadm.go:319] 
	I1213 11:01:51.908010  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:01:51.908040  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:01:51.908138  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:01:51.908141  396441 kubeadm.go:319] 
	I1213 11:01:51.908238  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:01:51.908267  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:01:51.908295  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:01:51.908298  396441 kubeadm.go:319] 
	I1213 11:01:51.911942  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:01:51.912375  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:01:51.912489  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:01:51.912750  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:01:51.912759  396441 kubeadm.go:319] 
	I1213 11:01:51.912853  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:01:51.912889  396441 kubeadm.go:403] duration metric: took 12m7.937442674s to StartCluster
	I1213 11:01:51.912920  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:01:51.912979  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:01:51.938530  396441 cri.go:89] found id: ""
	I1213 11:01:51.938545  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.938552  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:01:51.938558  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:01:51.938614  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:01:51.963977  396441 cri.go:89] found id: ""
	I1213 11:01:51.963991  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.963998  396441 logs.go:284] No container was found matching "etcd"
	I1213 11:01:51.964003  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:01:51.964062  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:01:51.988936  396441 cri.go:89] found id: ""
	I1213 11:01:51.988951  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.988958  396441 logs.go:284] No container was found matching "coredns"
	I1213 11:01:51.988963  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:01:51.989016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:01:52.019417  396441 cri.go:89] found id: ""
	I1213 11:01:52.019431  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.019439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:01:52.019444  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:01:52.019504  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:01:52.046337  396441 cri.go:89] found id: ""
	I1213 11:01:52.046352  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.046360  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:01:52.046365  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:01:52.046426  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:01:52.072247  396441 cri.go:89] found id: ""
	I1213 11:01:52.072261  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.072269  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:01:52.072274  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:01:52.072335  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:01:52.098208  396441 cri.go:89] found id: ""
	I1213 11:01:52.098222  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.098230  396441 logs.go:284] No container was found matching "kindnet"
	I1213 11:01:52.098238  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 11:01:52.098248  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:01:52.165245  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 11:01:52.165265  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:01:52.179908  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:01:52.179924  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:01:52.245950  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:01:52.245965  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:01:52.245974  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:01:52.322777  396441 logs.go:123] Gathering logs for container status ...
	I1213 11:01:52.322795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 11:01:52.353497  396441 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:01:52.353528  396441 out.go:285] * 
	W1213 11:01:52.353591  396441 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.353607  396441 out.go:285] * 
	W1213 11:01:52.355785  396441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:01:52.362615  396441 out.go:203] 
	W1213 11:01:52.366304  396441 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.366353  396441 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:01:52.366376  396441 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:01:52.369563  396441 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.43259327Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432628568Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432669931Z" level=info msg="Create NRI interface"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432773423Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432782531Z" level=info msg="runtime interface created"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432793805Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432800656Z" level=info msg="runtime interface starting up..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432807844Z" level=info msg="starting plugins..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432820907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432883414Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:49:42 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.19567159Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c8401471-cf55-4e91-8c5f-25a7803eeff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.1966268Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=72a9b02f-646a-4554-ae9a-9e3da3b7ad0c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197123888Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9caf3dbd-ac4b-4ee0-a136-15962b2eeea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197584529Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=86fa4638-cc37-45ef-b1b9-31efae43690d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198007073Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=37f9bdfd-077a-4751-a897-e7c971db1d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198454331Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f02d4db1-79bc-4d79-9072-497dd5c75d43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198871681Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a0158e10-bee2-405d-9643-45512681023c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.293525942Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fa6c343-c4b6-41b8-a772-00d9ff9f481b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294225272Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f29d3de7-c9c2-4c34-9a76-76647c28c359 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294692649Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=115a2b32-9e68-43c7-90af-1d4450976368 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295176544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cce5b0a2-af51-4974-8c4f-26d3aadd70cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295829785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bba9558c-4301-4576-890b-64bddc5af9b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296320695Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59bc3a50-c36c-4024-8506-47dbb78201d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296784429Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=97458369-23f9-4acf-a127-9b41f30c00a3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:01:55.876800   21325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:55.877500   21325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:55.879399   21325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:55.879952   21325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:55.881004   21325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:01:55 up  2:44,  0 user,  load average: 0.03, 0.15, 0.41
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:01:53 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:01:53 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 13 11:01:53 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:53 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:53 functional-407525 kubelet[21200]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:53 functional-407525 kubelet[21200]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:53 functional-407525 kubelet[21200]: E1213 11:01:53.845995   21200 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:01:53 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:01:53 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:01:54 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 13 11:01:54 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:54 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:54 functional-407525 kubelet[21218]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:54 functional-407525 kubelet[21218]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:54 functional-407525 kubelet[21218]: E1213 11:01:54.579808   21218 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:01:54 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:01:54 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:01:55 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 13 11:01:55 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:55 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:01:55 functional-407525 kubelet[21241]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:55 functional-407525 kubelet[21241]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:01:55 functional-407525 kubelet[21241]: E1213 11:01:55.340780   21241 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:01:55 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:01:55 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (316.915487ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.13s)
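The failure above traces back to the kubelet journal in the logs: on this cgroup v1 host, kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", so the apiserver on port 8441 never comes up and every kubectl-dependent check fails. A minimal follow-up sketch, using only the commands and the --extra-config flag that minikube itself prints in the K8S_KUBELET_NOT_RUNNING advice above (whether that override is sufficient on a cgroup v1 host is an assumption this report does not verify):

	# Inspect the crash-looping kubelet on the node (restart counter is at 963+ above)
	out/minikube-linux-arm64 -p functional-407525 ssh "sudo journalctl -xeu kubelet"
	# Retry the start with the setting suggested in the advice above
	out/minikube-linux-arm64 start -p functional-407525 --extra-config=kubelet.cgroup-driver=systemd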

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-407525 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-407525 apply -f testdata/invalidsvc.yaml: exit status 1 (52.234277ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-407525 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.05s)
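The apply here fails before any validation happens: kubectl cannot download the OpenAPI schema because the apiserver endpoint in the error refuses connections, so the --validate=false hint in the message would not change the outcome. A hedged check sketch (not part of the test) using the address and binary path that appear elsewhere in this report:

	# Confirm the apiserver endpoint is actually down before touching validation flags
	curl -k https://192.168.49.2:8441/healthz
	out/minikube-linux-arm64 status -p functional-407525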

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-407525 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-407525 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-407525 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-407525 --alsologtostderr -v=1] stderr:
I1213 11:03:59.501124  413780 out.go:360] Setting OutFile to fd 1 ...
I1213 11:03:59.501250  413780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:03:59.501260  413780 out.go:374] Setting ErrFile to fd 2...
I1213 11:03:59.501264  413780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:03:59.501536  413780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:03:59.501835  413780 mustload.go:66] Loading cluster: functional-407525
I1213 11:03:59.502258  413780 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:03:59.502781  413780 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:03:59.520430  413780 host.go:66] Checking if "functional-407525" exists ...
I1213 11:03:59.520754  413780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 11:03:59.573616  413780 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.564201499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 11:03:59.573747  413780 api_server.go:166] Checking apiserver status ...
I1213 11:03:59.573821  413780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 11:03:59.573868  413780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:03:59.591720  413780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
W1213 11:03:59.701728  413780 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1213 11:03:59.704944  413780 out.go:179] * The control-plane node functional-407525 apiserver is not running: (state=Stopped)
I1213 11:03:59.707870  413780 out.go:179]   To start a cluster, run: "minikube start -p functional-407525"
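The stderr above shows why no URL was produced: the dashboard command inspects the container, then looks for a kube-apiserver process on the node, finds none, and prints the "apiserver is not running" message instead of a URL. A hedged sketch of the same check run by hand (commands taken from the cli_runner/ssh_runner lines above; using minikube ssh here is an assumption standing in for the test's internal ssh runner):

	docker container inspect functional-407525 --format={{.State.Status}}
	out/minikube-linux-arm64 -p functional-407525 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"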
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (582.96486ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-407525 service hello-node --url                                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001:/mount-9p --alsologtostderr -v=1              │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh -- ls -la /mount-9p                                                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh cat /mount-9p/test-1765623829395310073                                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh sudo umount -f /mount-9p                                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1621853940/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh -- ls -la /mount-9p                                                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh sudo umount -f /mount-9p                                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount1 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount3 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount1                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount2 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount1                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh findmnt -T /mount2                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh findmnt -T /mount3                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ mount     │ -p functional-407525 --kill=true                                                                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ start     │ -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ start     │ -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ start     │ -p functional-407525 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-407525 --alsologtostderr -v=1                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:03:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:03:59.259655  413709 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:03:59.259777  413709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.259815  413709 out.go:374] Setting ErrFile to fd 2...
	I1213 11:03:59.259828  413709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.260495  413709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:03:59.261014  413709 out.go:368] Setting JSON to false
	I1213 11:03:59.261908  413709 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9992,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:03:59.262052  413709 start.go:143] virtualization:  
	I1213 11:03:59.265224  413709 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:03:59.267327  413709 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:03:59.267393  413709 notify.go:221] Checking for updates...
	I1213 11:03:59.272993  413709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:03:59.275780  413709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:03:59.278640  413709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:03:59.281443  413709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:03:59.284249  413709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:03:59.287678  413709 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:03:59.288244  413709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:03:59.310820  413709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:03:59.310948  413709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:59.373554  413709 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.36434928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:59.373661  413709 docker.go:319] overlay module found
	I1213 11:03:59.376646  413709 out.go:179] * Using the docker driver based on existing profile
	I1213 11:03:59.379458  413709 start.go:309] selected driver: docker
	I1213 11:03:59.379478  413709 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:59.379619  413709 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:03:59.379724  413709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:59.442090  413709 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.432223878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:59.442555  413709 cni.go:84] Creating CNI manager for ""
	I1213 11:03:59.442618  413709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:03:59.442659  413709 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:59.445807  413709 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.43259327Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432628568Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432669931Z" level=info msg="Create NRI interface"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432773423Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432782531Z" level=info msg="runtime interface created"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432793805Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432800656Z" level=info msg="runtime interface starting up..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432807844Z" level=info msg="starting plugins..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432820907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432883414Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:49:42 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.19567159Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c8401471-cf55-4e91-8c5f-25a7803eeff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.1966268Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=72a9b02f-646a-4554-ae9a-9e3da3b7ad0c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197123888Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9caf3dbd-ac4b-4ee0-a136-15962b2eeea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197584529Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=86fa4638-cc37-45ef-b1b9-31efae43690d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198007073Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=37f9bdfd-077a-4751-a897-e7c971db1d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198454331Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f02d4db1-79bc-4d79-9072-497dd5c75d43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198871681Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a0158e10-bee2-405d-9643-45512681023c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.293525942Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fa6c343-c4b6-41b8-a772-00d9ff9f481b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294225272Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f29d3de7-c9c2-4c34-9a76-76647c28c359 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294692649Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=115a2b32-9e68-43c7-90af-1d4450976368 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295176544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cce5b0a2-af51-4974-8c4f-26d3aadd70cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295829785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bba9558c-4301-4576-890b-64bddc5af9b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296320695Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59bc3a50-c36c-4024-8506-47dbb78201d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296784429Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=97458369-23f9-4acf-a127-9b41f30c00a3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:04:01.088620   23399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:01.089020   23399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:01.090831   23399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:01.091327   23399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:01.092770   23399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:04:01 up  2:46,  0 user,  load average: 0.28, 0.20, 0.40
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:03:58 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:59 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 13 11:03:59 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:59 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:59 functional-407525 kubelet[23282]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:59 functional-407525 kubelet[23282]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:59 functional-407525 kubelet[23282]: E1213 11:03:59.080824   23282 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:59 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:59 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:59 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1131.
	Dec 13 11:03:59 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:59 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:59 functional-407525 kubelet[23297]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:59 functional-407525 kubelet[23297]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:59 functional-407525 kubelet[23297]: E1213 11:03:59.815753   23297 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:59 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:59 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:04:00 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 13 11:04:00 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:00 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:00 functional-407525 kubelet[23317]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:00 functional-407525 kubelet[23317]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:00 functional-407525 kubelet[23317]: E1213 11:04:00.614672   23317 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:04:00 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:04:00 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (347.750663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (2.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 status: exit status 2 (328.63244ms)

                                                
                                                
-- stdout --
	functional-407525
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-407525 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (311.898636ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-407525 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 status -o json: exit status 2 (309.458619ms)

                                                
                                                
-- stdout --
	{"Name":"functional-407525","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-407525 status -o json" : exit status 2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (334.90078ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-407525 service list                                                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ service │ functional-407525 service list -o json                                                                                                              │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ service │ functional-407525 service --namespace=default --https --url hello-node                                                                              │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ service │ functional-407525 service hello-node --url --format={{.IP}}                                                                                         │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ service │ functional-407525 service hello-node --url                                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount   │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001:/mount-9p --alsologtostderr -v=1              │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh     │ functional-407525 ssh -- ls -la /mount-9p                                                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh     │ functional-407525 ssh cat /mount-9p/test-1765623829395310073                                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh     │ functional-407525 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-407525 ssh sudo umount -f /mount-9p                                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ mount   │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1621853940/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh     │ functional-407525 ssh -- ls -la /mount-9p                                                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh     │ functional-407525 ssh sudo umount -f /mount-9p                                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount   │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount1 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount   │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount3 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-407525 ssh findmnt -T /mount1                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount   │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount2 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-407525 ssh findmnt -T /mount1                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh     │ functional-407525 ssh findmnt -T /mount2                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh     │ functional-407525 ssh findmnt -T /mount3                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ mount   │ -p functional-407525 --kill=true                                                                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:49:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:49:39.014629  396441 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:49:39.014755  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.014760  396441 out.go:374] Setting ErrFile to fd 2...
	I1213 10:49:39.014764  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.015052  396441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:49:39.015432  396441 out.go:368] Setting JSON to false
	I1213 10:49:39.016356  396441 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9131,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:49:39.016423  396441 start.go:143] virtualization:  
	I1213 10:49:39.019850  396441 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:49:39.022886  396441 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:49:39.022964  396441 notify.go:221] Checking for updates...
	I1213 10:49:39.029514  396441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:49:39.032457  396441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:49:39.035302  396441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:49:39.038191  396441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:49:39.041178  396441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:49:39.044626  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:39.044735  396441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:49:39.073132  396441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:49:39.073240  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.131952  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.12226015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.132042  396441 docker.go:319] overlay module found
	I1213 10:49:39.135181  396441 out.go:179] * Using the docker driver based on existing profile
	I1213 10:49:39.138004  396441 start.go:309] selected driver: docker
	I1213 10:49:39.138012  396441 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.138117  396441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:49:39.138218  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.201683  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.192871513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.202106  396441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:49:39.202131  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:39.202182  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:39.202230  396441 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.205440  396441 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:49:39.208563  396441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:49:39.211465  396441 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:49:39.214245  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:39.214282  396441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:49:39.214290  396441 cache.go:65] Caching tarball of preloaded images
	I1213 10:49:39.214340  396441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:49:39.214371  396441 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:49:39.214379  396441 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:49:39.214508  396441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:49:39.233590  396441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:49:39.233607  396441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:49:39.233619  396441 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:49:39.233649  396441 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:49:39.233703  396441 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-407525"
	I1213 10:49:39.233721  396441 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:49:39.233725  396441 fix.go:54] fixHost starting: 
	I1213 10:49:39.234003  396441 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:49:39.250771  396441 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:49:39.250790  396441 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:49:39.253977  396441 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:49:39.254007  396441 machine.go:94] provisionDockerMachine start ...
	I1213 10:49:39.254089  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.270672  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.270992  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.270998  396441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:49:39.419071  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.419086  396441 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:49:39.419147  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.437001  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.437302  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.437311  396441 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:49:39.596975  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.597049  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.614748  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.615049  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.615063  396441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:49:39.763894  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:49:39.763910  396441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:49:39.763930  396441 ubuntu.go:190] setting up certificates
	I1213 10:49:39.763939  396441 provision.go:84] configureAuth start
	I1213 10:49:39.763997  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:39.782226  396441 provision.go:143] copyHostCerts
	I1213 10:49:39.782297  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:49:39.782308  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:49:39.782382  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:49:39.782470  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:49:39.782473  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:49:39.782511  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:49:39.782561  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:49:39.782565  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:49:39.782587  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:49:39.782630  396441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
	I1213 10:49:40.264423  396441 provision.go:177] copyRemoteCerts
	I1213 10:49:40.264477  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:49:40.264518  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.288593  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.395503  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:49:40.413777  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:49:40.432071  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:49:40.449556  396441 provision.go:87] duration metric: took 685.604236ms to configureAuth
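
The configureAuth phase above regenerates the machine's server certificate so that its SANs cover 127.0.0.1, 192.168.49.2, functional-407525, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A minimal sketch of producing a CA-signed server certificate with those SAN types using only Go's standard library (the in-memory CA and stdout output are illustrative simplifications, not minikube's provision.go code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Illustrative only: build an in-memory CA, then sign a server cert
	// carrying the same kinds of SANs seen in the log line above
	// (IPs 127.0.0.1 / 192.168.49.2, DNS names functional-407525, localhost, minikube).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-407525"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"functional-407525", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// Print the signed server certificate in PEM form, the format scp'd to /etc/docker.
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
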
	I1213 10:49:40.449573  396441 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:49:40.449767  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:40.449873  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.466720  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:40.467023  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:40.467036  396441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:49:40.812989  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:49:40.813002  396441 machine.go:97] duration metric: took 1.558987505s to provisionDockerMachine
	I1213 10:49:40.813012  396441 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:49:40.813024  396441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:49:40.813085  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:49:40.813128  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.831095  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.935727  396441 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:49:40.939068  396441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:49:40.939087  396441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:49:40.939096  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:49:40.939151  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:49:40.939232  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:49:40.939303  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:49:40.939344  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:49:40.947101  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:40.964732  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:49:40.981668  396441 start.go:296] duration metric: took 168.641746ms for postStartSetup
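
postStartSetup scans the local .minikube/files tree and mirrors each file onto the node at the same relative path (here 3563282.pem into /etc/ssl/certs and the nested hosts file into /etc/test/nested/copy/356328). A rough sketch of that scan-and-map step; the root path is taken from the log, and the mapping logic is a simplification rather than minikube's filesync.go:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	// Local asset root from the log; minikube scans $MINIKUBE_HOME/.minikube/files.
	localRoot := "/home/jenkins/minikube-integration/22127-354468/.minikube/files"

	// Walk the tree and print the local -> remote mapping; the remote path is
	// simply the path relative to localRoot, rooted at / on the node.
	err := filepath.WalkDir(localRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		rel, err := filepath.Rel(localRoot, path)
		if err != nil {
			return err
		}
		fmt.Printf("local asset: %s -> /%s\n", path, rel)
		return nil
	})
	if err != nil {
		fmt.Println("scan failed:", err)
	}
}
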
	I1213 10:49:40.981767  396441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:49:40.981804  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.001302  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.104610  396441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:49:41.109266  396441 fix.go:56] duration metric: took 1.875532342s for fixHost
	I1213 10:49:41.109282  396441 start.go:83] releasing machines lock for "functional-407525", held for 1.875571571s
	I1213 10:49:41.109349  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:41.125841  396441 ssh_runner.go:195] Run: cat /version.json
	I1213 10:49:41.125888  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.126157  396441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:49:41.126214  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.148984  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.157093  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.349053  396441 ssh_runner.go:195] Run: systemctl --version
	I1213 10:49:41.355137  396441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:49:41.394464  396441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:49:41.399282  396441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:49:41.399342  396441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:49:41.407074  396441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 10:49:41.407089  396441 start.go:496] detecting cgroup driver to use...
	I1213 10:49:41.407118  396441 detect.go:187] detected "cgroupfs" cgroup driver on host os
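
detect.go reports the cgroupfs cgroup driver for this host, and that value later drives the CRI-O and kubelet cgroup settings. One simple heuristic for such a call, offered only as an illustration and not as minikube's actual detection logic, is to check for the unified cgroup v2 hierarchy and a systemd init:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Hedged heuristic, not minikube's detect.go: report whether the unified
	// cgroup v2 hierarchy is mounted and whether systemd is PID 1. Hosts
	// lacking either are commonly run with the "cgroupfs" cgroup driver.
	_, errV2 := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	cgroupV2 := errV2 == nil

	comm, _ := os.ReadFile("/proc/1/comm")
	systemdInit := string(comm) == "systemd\n"

	driver := "cgroupfs"
	if cgroupV2 && systemdInit {
		driver = "systemd"
	}
	fmt.Printf("cgroup v2: %v, systemd init: %v -> suggested driver: %s\n",
		cgroupV2, systemdInit, driver)
}
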
	I1213 10:49:41.407177  396441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:49:41.422248  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:49:41.434814  396441 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:49:41.434866  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:49:41.450404  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:49:41.463493  396441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:49:41.587216  396441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:49:41.708085  396441 docker.go:234] disabling docker service ...
	I1213 10:49:41.708178  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:49:41.726011  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:49:41.739486  396441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:49:41.858015  396441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:49:41.976835  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:49:41.990126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:49:42.004186  396441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:49:42.004281  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.015561  396441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:49:42.015636  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.026721  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.037311  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.047280  396441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:49:42.056517  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.067880  396441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.078430  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.089815  396441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:49:42.100093  396441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:49:42.110006  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.245156  396441 ssh_runner.go:195] Run: sudo systemctl restart crio
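
The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, force cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", make sure default_sysctls carries net.ipv4.ip_unprivileged_port_start=0, enable IPv4 forwarding, then daemon-reload and restart crio. A small sketch of the same line-oriented rewrites done with Go regexps, applied to an abbreviated in-memory sample instead of the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Abbreviated stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Same intent as the sed invocations in the log: rewrite the whole
	// pause_image and cgroup_manager lines, drop any existing conmon_cgroup,
	// and re-add it as "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
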
	I1213 10:49:42.438084  396441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:49:42.438159  396441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:49:42.442010  396441 start.go:564] Will wait 60s for crictl version
	I1213 10:49:42.442064  396441 ssh_runner.go:195] Run: which crictl
	I1213 10:49:42.445629  396441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:49:42.469110  396441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:49:42.469189  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.498052  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.536633  396441 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:49:42.539603  396441 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:49:42.571469  396441 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:49:42.578474  396441 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:49:42.582400  396441 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:49:42.582534  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:42.582601  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.622515  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.622526  396441 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:49:42.622581  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.647505  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.647532  396441 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:49:42.647540  396441 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:49:42.647645  396441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
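
kubeadm.go:947 shows the systemd drop-in that is written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes): an ExecStart override that pins the kubelet binary path, --hostname-override and --node-ip for this profile. A hedged sketch of rendering such a unit with text/template; the struct and the reduced flag set are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds just the values that vary per node in the drop-in below;
// the real unit carries more flags (see the ExecStart line logged above).
type kubeletOpts struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		NodeName:    "functional-407525",
		NodeIP:      "192.168.49.2",
	})
}
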
	I1213 10:49:42.647723  396441 ssh_runner.go:195] Run: crio config
	I1213 10:49:42.707356  396441 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:49:42.707414  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:42.707422  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:42.707430  396441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:49:42.707452  396441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:49:42.707613  396441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
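
The rendered kubeadm.yaml above is one file holding four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---. A stdlib-only sketch that splits such a manifest into its documents and reports each kind (a real consumer would use a YAML parser rather than line scanning):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Truncated stand-in for /var/tmp/minikube/kubeadm.yaml.new.
	manifest := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

	// Split on the document separator and pull out each document's kind.
	for i, doc := range strings.Split(manifest, "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}
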
	I1213 10:49:42.707687  396441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:49:42.715307  396441 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:49:42.715378  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:49:42.722969  396441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:49:42.735593  396441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:49:42.747933  396441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 10:49:42.760993  396441 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:49:42.765274  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.881089  396441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:49:43.272837  396441 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:49:43.272850  396441 certs.go:195] generating shared ca certs ...
	I1213 10:49:43.272866  396441 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:49:43.273008  396441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:49:43.273053  396441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:49:43.273060  396441 certs.go:257] generating profile certs ...
	I1213 10:49:43.273166  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:49:43.273224  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:49:43.273264  396441 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:49:43.273384  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:49:43.273414  396441 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:49:43.273421  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:49:43.273447  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:49:43.273476  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:49:43.273501  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:49:43.273543  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:43.274189  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:49:43.293217  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:49:43.313563  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:49:43.332800  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:49:43.356461  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:49:43.375598  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:49:43.393764  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:49:43.411407  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:49:43.429560  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:49:43.447014  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:49:43.465017  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:49:43.483101  396441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:49:43.496527  396441 ssh_runner.go:195] Run: openssl version
	I1213 10:49:43.502994  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.510763  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:49:43.518540  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522603  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522661  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.566464  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:49:43.574093  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.581656  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:49:43.589363  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593193  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593258  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.634480  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:49:43.641940  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.649200  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:49:43.656832  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660735  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660790  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.706761  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:49:43.714203  396441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:49:43.718007  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:49:43.761049  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:49:43.803978  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:49:43.847848  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:49:43.889404  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:49:43.931127  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
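
The openssl x509 -checkend 86400 calls above confirm that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. An equivalent single-file check in Go, taking the PEM path as an argument (an illustration, not minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors openssl x509 -checkend 86400: fail if the certificate
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Printf("certificate expires at %s (within 24h)\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}

Invoked for example as go run checkend.go /var/lib/minikube/certs/apiserver.crt; a non-zero exit corresponds to openssl's -checkend failure.
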
	I1213 10:49:43.975457  396441 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:43.975563  396441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:49:43.975628  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.005477  396441 cri.go:89] found id: ""
	I1213 10:49:44.005555  396441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:49:44.016406  396441 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:49:44.016416  396441 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:49:44.016469  396441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:49:44.028094  396441 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.028621  396441 kubeconfig.go:125] found "functional-407525" server: "https://192.168.49.2:8441"
	I1213 10:49:44.029882  396441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:49:44.039549  396441 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:35:07.660360228 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:49:42.756829139 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
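
kubeadm.go:645 chooses to reconfigure the cluster because the diff shows exactly one drifted field: enable-admission-plugins moved from the default plugin list to NamespaceAutoProvision, matching the apiserver.enable-admission-plugins extra-config noted earlier in this start. A trivial sketch of that drift check, using a plain byte comparison in place of the diff -u shown above (paths as in the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Current config versus the freshly rendered candidate.
	oldCfg, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	newCfg, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err1 != nil || err2 != nil {
		fmt.Fprintln(os.Stderr, "missing config:", err1, err2)
		os.Exit(1)
	}
	if bytes.Equal(oldCfg, newCfg) {
		fmt.Println("kubeadm config unchanged: reuse the running control plane")
		return
	}
	fmt.Println("kubeadm config drift detected: rerun the kubeadm init phases")
}
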
	I1213 10:49:44.039559  396441 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:49:44.039569  396441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:49:44.039622  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.076693  396441 cri.go:89] found id: ""
	I1213 10:49:44.076751  396441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:49:44.096721  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:49:44.104663  396441 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 10:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 10:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:39 /etc/kubernetes/scheduler.conf
	
	I1213 10:49:44.104731  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:49:44.112473  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:49:44.119938  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.119996  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:49:44.127386  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.135062  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.135113  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.142352  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:49:44.150087  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.150140  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:49:44.157689  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
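
The grep/rm sequence above keeps each file under /etc/kubernetes only if it already references https://control-plane.minikube.internal:8441; admin.conf passes, while kubelet.conf, controller-manager.conf and scheduler.conf do not contain the endpoint and are removed so the kubeadm phases below can regenerate them. A compact dry-run sketch of that keep-or-remove rule, with the endpoint and file list taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			fmt.Printf("%s: endpoint present, keeping\n", f)
			continue
		}
		// Same effect as the sudo rm -f lines above: stale kubeconfigs are
		// dropped so kubeadm init phase kubeconfig all rewrites them.
		fmt.Printf("%s: endpoint missing, would remove\n", f)
	}
}
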
	I1213 10:49:44.166075  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:44.211012  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.340316  396441 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.129279793s)
	I1213 10:49:46.340374  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.548065  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.621630  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.676051  396441 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:49:46.676117  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:47.176335  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:47.676600  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:48.176220  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:48.676514  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:49.177109  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:49.677029  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:50.176294  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:50.676405  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:51.176207  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:51.677115  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:52.176309  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:52.676843  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:53.176518  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:53.677139  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:54.176272  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:54.677116  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:55.176949  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:55.677027  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:56.176855  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:56.677287  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:57.176985  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:57.676291  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:58.176321  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:58.676311  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:59.177074  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:59.676498  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:00.177244  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:00.676377  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:01.176944  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:01.676370  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:02.176565  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:02.676374  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:03.176325  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:03.677205  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:04.177202  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:04.676995  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:05.176541  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:05.676768  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:06.176328  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:06.676318  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:07.176298  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:07.676607  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:08.176977  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:08.676972  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:09.176754  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:09.676315  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:10.176824  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:10.676204  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:11.177281  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:11.676341  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:12.176307  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:12.677058  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:13.176868  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:13.676294  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:14.176196  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:14.676345  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:15.176220  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:15.676507  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:16.177216  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:16.676814  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:17.177128  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:17.676923  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:18.177103  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:18.677241  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:19.176631  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:19.676250  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:20.177039  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:20.676330  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:21.176991  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:21.676979  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:22.176310  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:22.676330  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:23.177072  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:23.676322  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:24.177240  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:24.676323  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:25.176911  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:25.677053  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:26.176471  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:26.676452  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:27.177028  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:27.676317  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:28.176975  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:28.676338  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:29.176379  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:29.676600  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:30.176351  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:30.676375  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:31.177240  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:31.677058  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:32.176843  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:32.676436  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:33.176344  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:33.677269  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:34.176296  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:34.676316  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:35.176823  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:35.676192  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:36.177128  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:36.677155  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:37.176402  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:37.676320  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:38.176310  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:38.677003  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:39.176915  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:39.676966  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:40.176371  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:40.676264  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:41.176771  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:41.676461  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:42.176264  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:42.676335  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:43.177015  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:43.676312  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:44.176383  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:44.676333  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:45.176214  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:45.676348  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:46.177104  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
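
From 10:49:46 to 10:50:46 minikube probes roughly every 500ms for a kube-apiserver process and never finds one, so after about a minute it falls through to the container listing and log gathering below. A hedged sketch of such a poll loop; the probe command mirrors the log, while the interval and timeout are read off the timestamps rather than taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// Same probe the log shows: pgrep for the apiserver command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the kube-apiserver process")
}
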
	I1213 10:50:46.676677  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:46.676771  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:46.701985  396441 cri.go:89] found id: ""
	I1213 10:50:46.701999  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.702006  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:46.702011  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:46.702065  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:46.727261  396441 cri.go:89] found id: ""
	I1213 10:50:46.727275  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.727282  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:46.727287  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:46.727352  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:46.756930  396441 cri.go:89] found id: ""
	I1213 10:50:46.756944  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.756952  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:46.756957  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:46.757025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:46.788731  396441 cri.go:89] found id: ""
	I1213 10:50:46.788745  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.788752  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:46.788757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:46.788810  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:46.816991  396441 cri.go:89] found id: ""
	I1213 10:50:46.817004  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.817012  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:46.817017  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:46.817072  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:46.847482  396441 cri.go:89] found id: ""
	I1213 10:50:46.847498  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.847505  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:46.847559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:46.847628  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:46.872720  396441 cri.go:89] found id: ""
	I1213 10:50:46.872734  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.872741  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:46.872749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:46.872759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:46.942912  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:46.942931  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:46.971862  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:46.971879  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:47.038918  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:47.038938  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:47.053895  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:47.053912  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:47.119106  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
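
The describe-nodes fallback fails for the same reason the wait loop did: nothing is listening on the apiserver port, so every request to localhost:8441 is refused. A one-shot reachability probe in the same spirit (an illustration, not part of minikube):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Matches the endpoint kubectl was refused on in the stderr block above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
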
	I1213 10:50:49.619370  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:49.629150  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:49.629213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:49.658173  396441 cri.go:89] found id: ""
	I1213 10:50:49.658186  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.658194  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:49.658199  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:49.658256  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:49.683401  396441 cri.go:89] found id: ""
	I1213 10:50:49.683414  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.683422  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:49.683427  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:49.683484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:49.708416  396441 cri.go:89] found id: ""
	I1213 10:50:49.708440  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.708448  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:49.708454  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:49.708520  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:49.737305  396441 cri.go:89] found id: ""
	I1213 10:50:49.737319  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.737326  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:49.737331  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:49.737385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:49.761415  396441 cri.go:89] found id: ""
	I1213 10:50:49.761431  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.761438  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:49.761443  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:49.761496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:49.805122  396441 cri.go:89] found id: ""
	I1213 10:50:49.805135  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.805142  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:49.805147  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:49.805205  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:49.846981  396441 cri.go:89] found id: ""
	I1213 10:50:49.846995  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.847002  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:49.847010  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:49.847020  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:49.918064  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:49.918084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:49.947649  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:49.947666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:50.012059  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:50.012084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:50.028985  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:50.029010  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:50.098147  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.599845  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:52.610036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:52.610095  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:52.638582  396441 cri.go:89] found id: ""
	I1213 10:50:52.638597  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.638603  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:52.638608  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:52.638670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:52.663295  396441 cri.go:89] found id: ""
	I1213 10:50:52.663308  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.663315  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:52.663320  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:52.663375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:52.689168  396441 cri.go:89] found id: ""
	I1213 10:50:52.689182  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.689189  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:52.689194  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:52.689253  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:52.714589  396441 cri.go:89] found id: ""
	I1213 10:50:52.714602  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.714610  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:52.714615  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:52.714669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:52.742324  396441 cri.go:89] found id: ""
	I1213 10:50:52.742338  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.742345  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:52.742363  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:52.742420  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:52.778053  396441 cri.go:89] found id: ""
	I1213 10:50:52.778067  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.778074  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:52.778079  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:52.778138  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:52.805632  396441 cri.go:89] found id: ""
	I1213 10:50:52.805646  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.805653  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:52.805661  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:52.805671  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:52.875461  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:52.875481  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:52.890245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:52.890261  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:52.957587  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.957599  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:52.957612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:53.025361  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:53.025388  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:55.556570  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:55.566463  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:55.566537  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:55.593903  396441 cri.go:89] found id: ""
	I1213 10:50:55.593917  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.593924  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:55.593929  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:55.593992  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:55.619079  396441 cri.go:89] found id: ""
	I1213 10:50:55.619093  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.619101  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:55.619106  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:55.619162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:55.645916  396441 cri.go:89] found id: ""
	I1213 10:50:55.645931  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.645938  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:55.645943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:55.646012  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:55.671377  396441 cri.go:89] found id: ""
	I1213 10:50:55.671397  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.671405  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:55.671410  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:55.671469  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:55.697872  396441 cri.go:89] found id: ""
	I1213 10:50:55.697886  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.697894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:55.697917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:55.697976  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:55.723576  396441 cri.go:89] found id: ""
	I1213 10:50:55.723589  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.723597  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:55.723602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:55.723655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:55.751256  396441 cri.go:89] found id: ""
	I1213 10:50:55.751270  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.751277  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:55.751286  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:55.751296  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:55.821963  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:55.821982  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:55.836343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:55.836357  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:55.903582  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:55.903594  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:55.903605  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:55.975012  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:55.975037  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:58.506699  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:58.517103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:58.517162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:58.542695  396441 cri.go:89] found id: ""
	I1213 10:50:58.542717  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.542725  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:58.542730  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:58.542787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:58.574075  396441 cri.go:89] found id: ""
	I1213 10:50:58.574089  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.574096  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:58.574101  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:58.574161  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:58.602982  396441 cri.go:89] found id: ""
	I1213 10:50:58.602997  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.603003  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:58.603008  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:58.603066  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:58.628158  396441 cri.go:89] found id: ""
	I1213 10:50:58.628172  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.628179  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:58.628185  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:58.628241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:58.653050  396441 cri.go:89] found id: ""
	I1213 10:50:58.653064  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.653071  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:58.653076  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:58.653133  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:58.678853  396441 cri.go:89] found id: ""
	I1213 10:50:58.678867  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.678875  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:58.678880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:58.678938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:58.704667  396441 cri.go:89] found id: ""
	I1213 10:50:58.704681  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.704689  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:58.704696  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:58.704706  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:58.769708  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:58.769731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:58.786197  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:58.786214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:58.859562  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:58.859572  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:58.859583  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:58.929132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:58.929151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.457488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:01.467675  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:01.467734  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:01.494648  396441 cri.go:89] found id: ""
	I1213 10:51:01.494662  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.494669  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:01.494675  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:01.494735  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:01.524042  396441 cri.go:89] found id: ""
	I1213 10:51:01.524056  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.524062  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:01.524068  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:01.524130  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:01.550111  396441 cri.go:89] found id: ""
	I1213 10:51:01.550126  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.550133  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:01.550139  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:01.550207  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:01.579191  396441 cri.go:89] found id: ""
	I1213 10:51:01.579205  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.579213  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:01.579218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:01.579274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:01.606365  396441 cri.go:89] found id: ""
	I1213 10:51:01.606379  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.606387  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:01.606393  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:01.606456  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:01.632570  396441 cri.go:89] found id: ""
	I1213 10:51:01.632584  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.632593  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:01.632598  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:01.632659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:01.659645  396441 cri.go:89] found id: ""
	I1213 10:51:01.659663  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.659671  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:01.659683  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:01.659694  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.689331  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:01.689348  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:01.754743  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:01.754766  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:01.772787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:01.772804  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:01.858533  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:01.858545  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:01.858555  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:04.427384  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:04.437715  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:04.437777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:04.463479  396441 cri.go:89] found id: ""
	I1213 10:51:04.463494  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.463501  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:04.463521  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:04.463580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:04.491057  396441 cri.go:89] found id: ""
	I1213 10:51:04.491072  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.491079  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:04.491084  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:04.491142  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:04.518458  396441 cri.go:89] found id: ""
	I1213 10:51:04.518471  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.518478  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:04.518483  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:04.518558  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:04.544830  396441 cri.go:89] found id: ""
	I1213 10:51:04.544844  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.544852  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:04.544857  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:04.544915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:04.571154  396441 cri.go:89] found id: ""
	I1213 10:51:04.571168  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.571177  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:04.571182  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:04.571241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:04.596261  396441 cri.go:89] found id: ""
	I1213 10:51:04.596275  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.596283  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:04.596288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:04.596344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:04.625558  396441 cri.go:89] found id: ""
	I1213 10:51:04.625572  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.625580  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:04.625587  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:04.625598  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:04.656944  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:04.656961  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:04.722740  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:04.722759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:04.738031  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:04.738051  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:04.817645  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:04.817655  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:04.817669  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.391199  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:07.401600  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:07.401657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:07.427331  396441 cri.go:89] found id: ""
	I1213 10:51:07.427346  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.427353  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:07.427358  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:07.427417  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:07.452053  396441 cri.go:89] found id: ""
	I1213 10:51:07.452067  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.452074  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:07.452079  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:07.452134  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:07.477750  396441 cri.go:89] found id: ""
	I1213 10:51:07.477764  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.477772  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:07.477777  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:07.477836  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:07.506642  396441 cri.go:89] found id: ""
	I1213 10:51:07.506657  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.506664  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:07.506669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:07.506727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:07.533730  396441 cri.go:89] found id: ""
	I1213 10:51:07.533744  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.533751  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:07.533757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:07.533815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:07.561505  396441 cri.go:89] found id: ""
	I1213 10:51:07.561521  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.561528  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:07.561534  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:07.561587  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:07.586129  396441 cri.go:89] found id: ""
	I1213 10:51:07.586142  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.586149  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:07.586157  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:07.586167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:07.601150  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:07.601167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:07.664624  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:07.664636  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:07.664649  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.733213  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:07.733233  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:07.762844  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:07.762860  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.334136  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:10.344504  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:10.344575  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:10.369562  396441 cri.go:89] found id: ""
	I1213 10:51:10.369575  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.369582  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:10.369587  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:10.369652  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:10.399083  396441 cri.go:89] found id: ""
	I1213 10:51:10.399097  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.399104  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:10.399110  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:10.399166  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:10.425761  396441 cri.go:89] found id: ""
	I1213 10:51:10.425786  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.425794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:10.425799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:10.425863  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:10.452658  396441 cri.go:89] found id: ""
	I1213 10:51:10.452672  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.452679  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:10.452685  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:10.452741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:10.477286  396441 cri.go:89] found id: ""
	I1213 10:51:10.477300  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.477308  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:10.477313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:10.477375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:10.502400  396441 cri.go:89] found id: ""
	I1213 10:51:10.502414  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.502421  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:10.502427  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:10.502483  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:10.527113  396441 cri.go:89] found id: ""
	I1213 10:51:10.527127  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.527134  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:10.527142  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:10.527152  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:10.558574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:10.558590  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.623165  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:10.623185  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:10.637513  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:10.637528  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:10.700566  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:10.700576  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:10.700586  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.275221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:13.285371  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:13.285427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:13.310677  396441 cri.go:89] found id: ""
	I1213 10:51:13.310691  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.310699  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:13.310704  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:13.310766  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:13.339471  396441 cri.go:89] found id: ""
	I1213 10:51:13.339485  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.339493  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:13.339498  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:13.339572  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:13.363772  396441 cri.go:89] found id: ""
	I1213 10:51:13.363787  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.363794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:13.363799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:13.363854  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:13.389059  396441 cri.go:89] found id: ""
	I1213 10:51:13.389073  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.389080  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:13.389085  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:13.389140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:13.414845  396441 cri.go:89] found id: ""
	I1213 10:51:13.414859  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.414866  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:13.414871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:13.414926  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:13.444040  396441 cri.go:89] found id: ""
	I1213 10:51:13.444054  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.444061  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:13.444066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:13.444122  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:13.472753  396441 cri.go:89] found id: ""
	I1213 10:51:13.472769  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.472779  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:13.472791  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:13.472806  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:13.487326  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:13.487342  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:13.553218  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:13.553229  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:13.553239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.623642  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:13.623662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:13.652820  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:13.652836  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
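	The cycle above (probe each expected control-plane container with crictl, then gather dmesg, describe-nodes, CRI-O, container-status, and kubelet logs) repeats roughly every three seconds for as long as nothing answers on the apiserver port 8441. A minimal shell sketch of the same checks, assembled only from the commands visible in this log (the binary path, kubeconfig path, and v1.35.0-beta.0 kubectl version are copied verbatim from the entries above; this is an illustration of the diagnostic loop, not part of the test run):

	  # Probe for each expected control-plane container; an empty result means "not found".
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$name" | grep -q . || echo "no container matching $name"
	  done
	  # Gather the same diagnostics minikube collects after each failed probe.
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo journalctl -u crio -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	  sudo journalctl -u kubelet -n 400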
	I1213 10:51:16.219667  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:16.229714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:16.229774  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:16.256550  396441 cri.go:89] found id: ""
	I1213 10:51:16.256564  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.256571  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:16.256576  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:16.256638  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:16.281266  396441 cri.go:89] found id: ""
	I1213 10:51:16.281280  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.281286  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:16.281292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:16.281347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:16.313494  396441 cri.go:89] found id: ""
	I1213 10:51:16.313509  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.313517  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:16.313522  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:16.313580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:16.338750  396441 cri.go:89] found id: ""
	I1213 10:51:16.338775  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.338783  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:16.338788  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:16.338852  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:16.363883  396441 cri.go:89] found id: ""
	I1213 10:51:16.363898  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.363905  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:16.363910  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:16.363980  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:16.390029  396441 cri.go:89] found id: ""
	I1213 10:51:16.390053  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.390060  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:16.390066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:16.390123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:16.415617  396441 cri.go:89] found id: ""
	I1213 10:51:16.415630  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.415637  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:16.415645  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:16.415660  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:16.430631  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:16.430647  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:16.492590  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:16.492603  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:16.492613  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:16.561556  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:16.561578  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:16.589545  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:16.589561  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:19.159792  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:19.170596  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:19.170661  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:19.198953  396441 cri.go:89] found id: ""
	I1213 10:51:19.198967  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.198974  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:19.198979  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:19.199036  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:19.225113  396441 cri.go:89] found id: ""
	I1213 10:51:19.225128  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.225135  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:19.225140  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:19.225195  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:19.250894  396441 cri.go:89] found id: ""
	I1213 10:51:19.250908  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.250916  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:19.250921  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:19.250975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:19.277076  396441 cri.go:89] found id: ""
	I1213 10:51:19.277091  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.277098  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:19.277103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:19.277164  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:19.304480  396441 cri.go:89] found id: ""
	I1213 10:51:19.304495  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.304502  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:19.304507  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:19.304567  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:19.330126  396441 cri.go:89] found id: ""
	I1213 10:51:19.330140  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.330147  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:19.330152  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:19.330214  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:19.355882  396441 cri.go:89] found id: ""
	I1213 10:51:19.355896  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.355904  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:19.355912  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:19.355922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:19.423413  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:19.423435  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:19.457267  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:19.457283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:19.523500  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:19.523525  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:19.538313  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:19.538329  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:19.607695  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:22.108783  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:22.118887  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:22.118946  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:22.146848  396441 cri.go:89] found id: ""
	I1213 10:51:22.146863  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.146870  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:22.146875  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:22.146929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:22.173022  396441 cri.go:89] found id: ""
	I1213 10:51:22.173036  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.173049  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:22.173055  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:22.173110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:22.197674  396441 cri.go:89] found id: ""
	I1213 10:51:22.197687  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.197695  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:22.197700  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:22.197757  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:22.225539  396441 cri.go:89] found id: ""
	I1213 10:51:22.225553  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.225560  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:22.225565  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:22.225624  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:22.253269  396441 cri.go:89] found id: ""
	I1213 10:51:22.253282  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.253290  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:22.253294  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:22.253355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:22.279157  396441 cri.go:89] found id: ""
	I1213 10:51:22.279172  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.279179  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:22.279184  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:22.279238  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:22.308952  396441 cri.go:89] found id: ""
	I1213 10:51:22.308965  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.308972  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:22.308979  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:22.309000  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:22.323813  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:22.323828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:22.388544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:22.388554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:22.388565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:22.456639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:22.456659  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:22.485416  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:22.485432  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.052020  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:25.063916  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:25.063975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:25.100470  396441 cri.go:89] found id: ""
	I1213 10:51:25.100484  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.100492  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:25.100498  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:25.100559  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:25.128317  396441 cri.go:89] found id: ""
	I1213 10:51:25.128331  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.128339  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:25.128344  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:25.128399  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:25.159302  396441 cri.go:89] found id: ""
	I1213 10:51:25.159316  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.159323  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:25.159328  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:25.159386  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:25.186563  396441 cri.go:89] found id: ""
	I1213 10:51:25.186577  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.186591  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:25.186597  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:25.186656  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:25.212652  396441 cri.go:89] found id: ""
	I1213 10:51:25.212666  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.212673  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:25.212678  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:25.212738  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:25.238215  396441 cri.go:89] found id: ""
	I1213 10:51:25.238229  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.238236  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:25.238242  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:25.238314  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:25.264506  396441 cri.go:89] found id: ""
	I1213 10:51:25.264519  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.264526  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:25.264533  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:25.264544  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:25.293035  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:25.293052  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.358428  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:25.358448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:25.373611  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:25.373627  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:25.438267  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:25.438277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:25.438288  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:28.007912  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:28.020840  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:28.020914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:28.054985  396441 cri.go:89] found id: ""
	I1213 10:51:28.054999  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.055007  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:28.055012  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:28.055076  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:28.086101  396441 cri.go:89] found id: ""
	I1213 10:51:28.086116  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.086123  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:28.086128  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:28.086184  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:28.114710  396441 cri.go:89] found id: ""
	I1213 10:51:28.114725  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.114732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:28.114737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:28.114796  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:28.141803  396441 cri.go:89] found id: ""
	I1213 10:51:28.141817  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.141825  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:28.141831  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:28.141891  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:28.176974  396441 cri.go:89] found id: ""
	I1213 10:51:28.176989  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.176997  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:28.177002  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:28.177063  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:28.202686  396441 cri.go:89] found id: ""
	I1213 10:51:28.202700  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.202707  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:28.202712  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:28.202777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:28.229573  396441 cri.go:89] found id: ""
	I1213 10:51:28.229587  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.229595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:28.229604  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:28.229617  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:28.245053  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:28.245070  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:28.314477  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:28.314487  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:28.314513  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:28.382755  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:28.382775  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:28.411608  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:28.411626  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:30.977998  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:30.988313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:30.988371  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:31.017637  396441 cri.go:89] found id: ""
	I1213 10:51:31.017652  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.017659  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:31.017664  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:31.017739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:31.051049  396441 cri.go:89] found id: ""
	I1213 10:51:31.051064  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.051071  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:31.051076  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:31.051147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:31.091994  396441 cri.go:89] found id: ""
	I1213 10:51:31.092012  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.092019  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:31.092025  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:31.092087  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:31.121068  396441 cri.go:89] found id: ""
	I1213 10:51:31.121083  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.121090  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:31.121095  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:31.121154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:31.148227  396441 cri.go:89] found id: ""
	I1213 10:51:31.148240  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.148248  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:31.148253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:31.148309  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:31.174904  396441 cri.go:89] found id: ""
	I1213 10:51:31.174919  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.174926  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:31.174932  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:31.174996  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:31.200730  396441 cri.go:89] found id: ""
	I1213 10:51:31.200743  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.200750  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:31.200757  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:31.200768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:31.215296  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:31.215315  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:31.279266  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:31.279277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:31.279286  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:31.346253  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:31.346273  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:31.374790  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:31.374805  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:33.942724  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:33.953904  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:33.953965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:33.979791  396441 cri.go:89] found id: ""
	I1213 10:51:33.979806  396441 logs.go:282] 0 containers: []
	W1213 10:51:33.979813  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:33.979819  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:33.979882  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:34.009113  396441 cri.go:89] found id: ""
	I1213 10:51:34.009129  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.009139  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:34.009145  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:34.009213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:34.054885  396441 cri.go:89] found id: ""
	I1213 10:51:34.054903  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.054911  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:34.054917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:34.054978  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:34.087332  396441 cri.go:89] found id: ""
	I1213 10:51:34.087346  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.087354  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:34.087360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:34.087416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:34.118541  396441 cri.go:89] found id: ""
	I1213 10:51:34.118556  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.118563  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:34.118568  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:34.118626  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:34.148286  396441 cri.go:89] found id: ""
	I1213 10:51:34.148300  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.148308  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:34.148313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:34.148368  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:34.174436  396441 cri.go:89] found id: ""
	I1213 10:51:34.174450  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.174457  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:34.174465  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:34.174484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:34.239233  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:34.239255  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:34.253915  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:34.253932  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:34.319992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:34.320001  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:34.320011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:34.387971  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:34.387992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:36.918587  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:36.930360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:36.930424  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:36.956712  396441 cri.go:89] found id: ""
	I1213 10:51:36.956726  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.956733  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:36.956738  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:36.956795  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:36.982448  396441 cri.go:89] found id: ""
	I1213 10:51:36.982462  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.982469  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:36.982474  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:36.982541  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:37.014971  396441 cri.go:89] found id: ""
	I1213 10:51:37.014987  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.014994  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:37.015000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:37.015090  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:37.045960  396441 cri.go:89] found id: ""
	I1213 10:51:37.045974  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.045981  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:37.045987  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:37.046044  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:37.077901  396441 cri.go:89] found id: ""
	I1213 10:51:37.077915  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.077933  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:37.077938  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:37.077995  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:37.105187  396441 cri.go:89] found id: ""
	I1213 10:51:37.105207  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.105214  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:37.105220  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:37.105275  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:37.134077  396441 cri.go:89] found id: ""
	I1213 10:51:37.134102  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.134110  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:37.134118  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:37.134129  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:37.199336  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:37.199355  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:37.213787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:37.213808  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:37.282802  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:37.282817  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:37.282827  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:37.352930  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:37.352958  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:39.888029  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:39.898120  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:39.898197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:39.925423  396441 cri.go:89] found id: ""
	I1213 10:51:39.925437  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.925444  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:39.925450  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:39.925510  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:39.951432  396441 cri.go:89] found id: ""
	I1213 10:51:39.951446  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.951454  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:39.951459  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:39.951547  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:39.977216  396441 cri.go:89] found id: ""
	I1213 10:51:39.977231  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.977238  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:39.977244  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:39.977298  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:40.019791  396441 cri.go:89] found id: ""
	I1213 10:51:40.019808  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.019816  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:40.019823  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:40.019900  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:40.051826  396441 cri.go:89] found id: ""
	I1213 10:51:40.051840  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.051847  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:40.051853  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:40.051928  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:40.091165  396441 cri.go:89] found id: ""
	I1213 10:51:40.091192  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.091200  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:40.091206  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:40.091272  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:40.122957  396441 cri.go:89] found id: ""
	I1213 10:51:40.122972  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.122979  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:40.122986  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:40.122998  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:40.186192  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:40.186204  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:40.186214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:40.252986  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:40.253005  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:40.283019  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:40.283042  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:40.347489  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:40.347521  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:42.863361  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:42.874757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:42.874824  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:42.899348  396441 cri.go:89] found id: ""
	I1213 10:51:42.899362  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.899370  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:42.899375  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:42.899440  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:42.925079  396441 cri.go:89] found id: ""
	I1213 10:51:42.925092  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.925100  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:42.925105  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:42.925165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:42.951388  396441 cri.go:89] found id: ""
	I1213 10:51:42.951403  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.951410  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:42.951415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:42.951470  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:42.977668  396441 cri.go:89] found id: ""
	I1213 10:51:42.977682  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.977688  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:42.977694  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:42.977748  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:43.002136  396441 cri.go:89] found id: ""
	I1213 10:51:43.002150  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.002157  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:43.002162  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:43.002219  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:43.038950  396441 cri.go:89] found id: ""
	I1213 10:51:43.038963  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.038971  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:43.038976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:43.039033  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:43.071573  396441 cri.go:89] found id: ""
	I1213 10:51:43.071588  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.071595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:43.071602  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:43.071615  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:43.141998  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:43.142019  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:43.157258  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:43.157274  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:43.224710  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:43.224720  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:43.224731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:43.294968  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:43.294988  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:45.825007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:45.835672  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:45.835743  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:45.861353  396441 cri.go:89] found id: ""
	I1213 10:51:45.861375  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.861382  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:45.861388  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:45.861452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:45.888508  396441 cri.go:89] found id: ""
	I1213 10:51:45.888522  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.888530  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:45.888534  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:45.888594  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:45.915026  396441 cri.go:89] found id: ""
	I1213 10:51:45.915040  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.915049  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:45.915054  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:45.915108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:45.940299  396441 cri.go:89] found id: ""
	I1213 10:51:45.940313  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.940320  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:45.940325  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:45.940382  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:45.965643  396441 cri.go:89] found id: ""
	I1213 10:51:45.965657  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.965664  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:45.965669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:45.965722  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:45.992269  396441 cri.go:89] found id: ""
	I1213 10:51:45.992283  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.992290  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:45.992295  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:45.992354  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:46.024907  396441 cri.go:89] found id: ""
	I1213 10:51:46.024922  396441 logs.go:282] 0 containers: []
	W1213 10:51:46.024941  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:46.024950  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:46.024980  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:46.072645  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:46.072664  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:46.144539  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:46.144569  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:46.160047  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:46.160063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:46.224857  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:46.224867  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:46.224878  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:48.792536  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:48.802577  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:48.802642  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:48.826706  396441 cri.go:89] found id: ""
	I1213 10:51:48.826720  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.826727  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:48.826733  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:48.826787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:48.851205  396441 cri.go:89] found id: ""
	I1213 10:51:48.851219  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.851226  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:48.851232  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:48.851286  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:48.875646  396441 cri.go:89] found id: ""
	I1213 10:51:48.875661  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.875669  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:48.875674  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:48.875742  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:48.902019  396441 cri.go:89] found id: ""
	I1213 10:51:48.902033  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.902041  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:48.902046  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:48.902102  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:48.926529  396441 cri.go:89] found id: ""
	I1213 10:51:48.926543  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.926550  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:48.926555  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:48.926610  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:48.952549  396441 cri.go:89] found id: ""
	I1213 10:51:48.952563  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.952570  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:48.952576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:48.952637  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:48.977178  396441 cri.go:89] found id: ""
	I1213 10:51:48.977191  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.977198  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:48.977206  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:48.977218  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:49.044123  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:49.044147  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:49.066217  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:49.066239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:49.145635  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:49.145645  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:49.145655  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:49.212965  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:49.212984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:51.744115  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:51.755896  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:51.755984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:51.790945  396441 cri.go:89] found id: ""
	I1213 10:51:51.790958  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.790965  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:51.790970  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:51.791024  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:51.816688  396441 cri.go:89] found id: ""
	I1213 10:51:51.816702  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.816709  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:51.816715  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:51.816782  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:51.841873  396441 cri.go:89] found id: ""
	I1213 10:51:51.841886  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.841893  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:51.841898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:51.841955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:51.867108  396441 cri.go:89] found id: ""
	I1213 10:51:51.867121  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.867129  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:51.867134  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:51.867187  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:51.892370  396441 cri.go:89] found id: ""
	I1213 10:51:51.892383  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.892390  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:51.892395  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:51.892453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:51.923043  396441 cri.go:89] found id: ""
	I1213 10:51:51.923057  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.923064  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:51.923069  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:51.923159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:51.948869  396441 cri.go:89] found id: ""
	I1213 10:51:51.948882  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.948889  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:51.948897  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:51.948926  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:52.018383  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:52.018405  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:52.018422  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:52.099342  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:52.099363  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:52.136780  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:52.136795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:52.202388  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:52.202408  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:54.716950  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:54.726860  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:54.726918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:54.751377  396441 cri.go:89] found id: ""
	I1213 10:51:54.751389  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.751396  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:54.751401  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:54.751460  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:54.776769  396441 cri.go:89] found id: ""
	I1213 10:51:54.776782  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.776801  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:54.776806  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:54.776871  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:54.806646  396441 cri.go:89] found id: ""
	I1213 10:51:54.806659  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.806666  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:54.806671  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:54.806727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:54.834243  396441 cri.go:89] found id: ""
	I1213 10:51:54.834256  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.834264  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:54.834269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:54.834322  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:54.859938  396441 cri.go:89] found id: ""
	I1213 10:51:54.859958  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.859965  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:54.859970  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:54.860025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:54.886545  396441 cri.go:89] found id: ""
	I1213 10:51:54.886559  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.886565  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:54.886571  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:54.886633  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:54.911784  396441 cri.go:89] found id: ""
	I1213 10:51:54.911798  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.911805  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:54.911812  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:54.911828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:54.973210  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:54.973220  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:54.973230  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:55.051411  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:55.051430  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:55.085480  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:55.085497  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:55.151220  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:55.151241  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.666660  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:57.676624  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:57.676689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:57.702082  396441 cri.go:89] found id: ""
	I1213 10:51:57.702095  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.702103  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:57.702108  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:57.702171  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:57.727577  396441 cri.go:89] found id: ""
	I1213 10:51:57.727591  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.727598  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:57.727603  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:57.727657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:57.752756  396441 cri.go:89] found id: ""
	I1213 10:51:57.752770  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.752777  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:57.752782  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:57.752846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:57.778022  396441 cri.go:89] found id: ""
	I1213 10:51:57.778036  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.778043  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:57.778048  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:57.778108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:57.803300  396441 cri.go:89] found id: ""
	I1213 10:51:57.803314  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.803321  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:57.803326  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:57.803385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:57.828374  396441 cri.go:89] found id: ""
	I1213 10:51:57.828389  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.828396  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:57.828402  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:57.828457  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:57.854910  396441 cri.go:89] found id: ""
	I1213 10:51:57.854925  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.854947  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:57.854955  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:57.854965  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:57.919106  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:57.919126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.933832  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:57.933847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:58.000903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:58.000914  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:58.000925  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:58.077434  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:58.077453  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:00.612878  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:00.623959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:00.624026  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:00.653620  396441 cri.go:89] found id: ""
	I1213 10:52:00.653635  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.653642  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:00.653647  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:00.653705  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:00.679802  396441 cri.go:89] found id: ""
	I1213 10:52:00.679818  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.679825  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:00.679830  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:00.679890  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:00.706677  396441 cri.go:89] found id: ""
	I1213 10:52:00.706691  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.706698  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:00.706703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:00.706759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:00.734612  396441 cri.go:89] found id: ""
	I1213 10:52:00.734627  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.734634  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:00.734640  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:00.734697  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:00.761763  396441 cri.go:89] found id: ""
	I1213 10:52:00.761777  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.761784  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:00.761790  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:00.761846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:00.790057  396441 cri.go:89] found id: ""
	I1213 10:52:00.790071  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.790078  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:00.790083  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:00.790140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:00.816353  396441 cri.go:89] found id: ""
	I1213 10:52:00.816367  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.816374  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:00.816381  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:00.816391  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:00.881315  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:00.881335  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:00.896220  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:00.896239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:00.961380  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:00.961391  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:00.961401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:01.031353  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:01.031373  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.565879  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:03.575985  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:03.576043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:03.605780  396441 cri.go:89] found id: ""
	I1213 10:52:03.605794  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.605801  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:03.605807  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:03.605864  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:03.630990  396441 cri.go:89] found id: ""
	I1213 10:52:03.631006  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.631013  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:03.631018  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:03.631073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:03.658564  396441 cri.go:89] found id: ""
	I1213 10:52:03.658578  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.658585  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:03.658590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:03.658645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:03.689093  396441 cri.go:89] found id: ""
	I1213 10:52:03.689108  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.689116  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:03.689121  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:03.689179  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:03.714786  396441 cri.go:89] found id: ""
	I1213 10:52:03.714800  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.714807  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:03.714812  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:03.714870  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:03.741755  396441 cri.go:89] found id: ""
	I1213 10:52:03.741769  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.741777  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:03.741783  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:03.741841  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:03.771487  396441 cri.go:89] found id: ""
	I1213 10:52:03.771502  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.771509  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:03.771538  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:03.771548  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.800650  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:03.800666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:03.866429  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:03.866448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:03.882243  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:03.882260  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:03.951157  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:03.951167  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:03.951190  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.522609  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:06.532880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:06.532944  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:06.557937  396441 cri.go:89] found id: ""
	I1213 10:52:06.557952  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.557959  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:06.557965  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:06.558020  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:06.588572  396441 cri.go:89] found id: ""
	I1213 10:52:06.588586  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.588595  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:06.588600  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:06.588660  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:06.614455  396441 cri.go:89] found id: ""
	I1213 10:52:06.614468  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.614476  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:06.614481  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:06.614546  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:06.640258  396441 cri.go:89] found id: ""
	I1213 10:52:06.640272  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.640279  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:06.640285  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:06.640341  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:06.666195  396441 cri.go:89] found id: ""
	I1213 10:52:06.666209  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.666216  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:06.666222  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:06.666278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:06.690768  396441 cri.go:89] found id: ""
	I1213 10:52:06.690781  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.690788  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:06.690793  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:06.690846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:06.714814  396441 cri.go:89] found id: ""
	I1213 10:52:06.714828  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.714835  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:06.714842  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:06.714852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:06.779445  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:06.779463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:06.794405  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:06.794419  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:06.863881  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:06.863893  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:06.863903  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.931872  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:06.931893  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:09.461689  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:09.471808  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:09.471866  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:09.498684  396441 cri.go:89] found id: ""
	I1213 10:52:09.498698  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.498705  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:09.498710  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:09.498770  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:09.525226  396441 cri.go:89] found id: ""
	I1213 10:52:09.525240  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.525248  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:09.525253  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:09.525312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:09.552412  396441 cri.go:89] found id: ""
	I1213 10:52:09.552426  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.552433  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:09.552438  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:09.552496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:09.581636  396441 cri.go:89] found id: ""
	I1213 10:52:09.581650  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.581657  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:09.581662  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:09.581717  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:09.606899  396441 cri.go:89] found id: ""
	I1213 10:52:09.606913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.606926  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:09.606931  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:09.606985  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:09.635899  396441 cri.go:89] found id: ""
	I1213 10:52:09.635913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.635920  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:09.635926  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:09.635990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:09.660294  396441 cri.go:89] found id: ""
	I1213 10:52:09.660308  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.660315  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:09.660322  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:09.660332  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:09.727938  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:09.727956  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:09.742322  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:09.742337  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:09.806667  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:09.806677  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:09.806688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:09.873384  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:09.873405  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.403419  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:12.413610  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:12.413670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:12.439264  396441 cri.go:89] found id: ""
	I1213 10:52:12.439277  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.439285  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:12.439290  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:12.439347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:12.464906  396441 cri.go:89] found id: ""
	I1213 10:52:12.464920  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.464927  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:12.464932  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:12.464988  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:12.498036  396441 cri.go:89] found id: ""
	I1213 10:52:12.498050  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.498057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:12.498062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:12.498124  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:12.527408  396441 cri.go:89] found id: ""
	I1213 10:52:12.527424  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.527432  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:12.527437  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:12.527493  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:12.553426  396441 cri.go:89] found id: ""
	I1213 10:52:12.553440  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.553449  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:12.553456  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:12.553512  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:12.577801  396441 cri.go:89] found id: ""
	I1213 10:52:12.577821  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.577829  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:12.577834  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:12.577892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:12.602596  396441 cri.go:89] found id: ""
	I1213 10:52:12.602610  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.602617  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:12.602625  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:12.602636  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:12.617159  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:12.617175  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:12.679319  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:12.679331  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:12.679344  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:12.750080  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:12.750100  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.781595  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:12.781612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:15.350487  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:15.360659  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:15.360718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:15.387859  396441 cri.go:89] found id: ""
	I1213 10:52:15.387872  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.387879  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:15.387885  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:15.387938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:15.414186  396441 cri.go:89] found id: ""
	I1213 10:52:15.414200  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.414207  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:15.414212  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:15.414279  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:15.441078  396441 cri.go:89] found id: ""
	I1213 10:52:15.441093  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.441099  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:15.441105  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:15.441160  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:15.469023  396441 cri.go:89] found id: ""
	I1213 10:52:15.469038  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.469045  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:15.469051  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:15.469107  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:15.497840  396441 cri.go:89] found id: ""
	I1213 10:52:15.497855  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.497862  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:15.497870  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:15.497929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:15.527216  396441 cri.go:89] found id: ""
	I1213 10:52:15.527240  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.527248  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:15.527253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:15.527318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:15.552512  396441 cri.go:89] found id: ""
	I1213 10:52:15.552526  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.552533  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:15.552541  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:15.552551  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:15.566854  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:15.566872  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:15.630069  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:15.630081  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:15.630091  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:15.696860  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:15.696880  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:15.724271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:15.724287  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.289647  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:18.301895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:18.301952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:18.337658  396441 cri.go:89] found id: ""
	I1213 10:52:18.337672  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.337679  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:18.337684  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:18.337739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:18.362954  396441 cri.go:89] found id: ""
	I1213 10:52:18.362968  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.362975  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:18.362980  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:18.363038  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:18.388674  396441 cri.go:89] found id: ""
	I1213 10:52:18.388687  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.388694  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:18.388699  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:18.388759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:18.420176  396441 cri.go:89] found id: ""
	I1213 10:52:18.420189  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.420196  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:18.420202  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:18.420264  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:18.445491  396441 cri.go:89] found id: ""
	I1213 10:52:18.445505  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.445513  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:18.445518  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:18.445579  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:18.470012  396441 cri.go:89] found id: ""
	I1213 10:52:18.470026  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.470034  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:18.470039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:18.470097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:18.495243  396441 cri.go:89] found id: ""
	I1213 10:52:18.495257  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.495264  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:18.495271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:18.495282  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.563479  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:18.563500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:18.578295  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:18.578311  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:18.646148  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:18.646163  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:18.646174  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:18.718257  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:18.718284  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:21.249994  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:21.259664  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:21.259726  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:21.295330  396441 cri.go:89] found id: ""
	I1213 10:52:21.295344  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.295352  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:21.295359  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:21.295416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:21.321231  396441 cri.go:89] found id: ""
	I1213 10:52:21.321244  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.321252  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:21.321257  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:21.321315  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:21.352593  396441 cri.go:89] found id: ""
	I1213 10:52:21.352607  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.352615  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:21.352620  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:21.352673  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:21.377931  396441 cri.go:89] found id: ""
	I1213 10:52:21.377946  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.377953  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:21.377959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:21.378013  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:21.402837  396441 cri.go:89] found id: ""
	I1213 10:52:21.402851  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.402857  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:21.402863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:21.402917  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:21.431840  396441 cri.go:89] found id: ""
	I1213 10:52:21.431855  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.431862  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:21.431867  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:21.431923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:21.456743  396441 cri.go:89] found id: ""
	I1213 10:52:21.456757  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.456764  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:21.456772  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:21.456783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:21.524923  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:21.524943  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:21.539831  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:21.539847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:21.606862  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:21.606873  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:21.606883  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:21.674639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:21.674658  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.206551  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:24.216405  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:24.216463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:24.242228  396441 cri.go:89] found id: ""
	I1213 10:52:24.242242  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.242257  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:24.242262  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:24.242323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:24.267087  396441 cri.go:89] found id: ""
	I1213 10:52:24.267101  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.267108  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:24.267113  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:24.267165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:24.309002  396441 cri.go:89] found id: ""
	I1213 10:52:24.309015  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.309022  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:24.309027  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:24.309094  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:24.339349  396441 cri.go:89] found id: ""
	I1213 10:52:24.339362  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.339370  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:24.339375  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:24.339432  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:24.368576  396441 cri.go:89] found id: ""
	I1213 10:52:24.368590  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.368597  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:24.368602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:24.368659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:24.394642  396441 cri.go:89] found id: ""
	I1213 10:52:24.394656  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.394663  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:24.394669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:24.394733  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:24.421211  396441 cri.go:89] found id: ""
	I1213 10:52:24.421225  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.421232  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:24.421240  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:24.421250  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:24.487558  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:24.487569  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:24.487579  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:24.558449  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:24.558469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.588318  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:24.588333  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:24.654250  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:24.654270  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:27.169201  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:27.180049  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:27.180109  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:27.206061  396441 cri.go:89] found id: ""
	I1213 10:52:27.206075  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.206082  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:27.206096  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:27.206154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:27.233191  396441 cri.go:89] found id: ""
	I1213 10:52:27.233205  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.233214  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:27.233219  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:27.233281  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:27.260006  396441 cri.go:89] found id: ""
	I1213 10:52:27.260026  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.260034  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:27.260039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:27.260097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:27.297935  396441 cri.go:89] found id: ""
	I1213 10:52:27.297949  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.297956  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:27.297962  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:27.298016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:27.327550  396441 cri.go:89] found id: ""
	I1213 10:52:27.327564  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.327571  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:27.327576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:27.327632  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:27.357264  396441 cri.go:89] found id: ""
	I1213 10:52:27.357277  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.357285  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:27.357290  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:27.357345  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:27.386557  396441 cri.go:89] found id: ""
	I1213 10:52:27.386571  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.386579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:27.386587  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:27.386600  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:27.451879  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:27.451900  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:27.466743  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:27.466762  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:27.534974  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:27.534984  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:27.534996  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:27.603674  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:27.603693  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:30.134007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:30.145384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:30.145454  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:30.177035  396441 cri.go:89] found id: ""
	I1213 10:52:30.177050  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.177058  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:30.177063  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:30.177121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:30.203582  396441 cri.go:89] found id: ""
	I1213 10:52:30.203597  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.203604  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:30.203609  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:30.203689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:30.230074  396441 cri.go:89] found id: ""
	I1213 10:52:30.230088  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.230106  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:30.230112  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:30.230183  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:30.255406  396441 cri.go:89] found id: ""
	I1213 10:52:30.255431  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.255439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:30.255445  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:30.255527  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:30.302847  396441 cri.go:89] found id: ""
	I1213 10:52:30.302861  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.302869  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:30.302876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:30.302931  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:30.345708  396441 cri.go:89] found id: ""
	I1213 10:52:30.345722  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.345730  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:30.345735  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:30.345794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:30.373285  396441 cri.go:89] found id: ""
	I1213 10:52:30.373298  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.373305  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:30.373313  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:30.373323  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:30.438965  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:30.438984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:30.453939  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:30.453957  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:30.519205  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:30.519233  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:30.519245  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:30.587307  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:30.587327  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:33.117585  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:33.128213  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:33.128278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:33.159433  396441 cri.go:89] found id: ""
	I1213 10:52:33.159447  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.159455  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:33.159462  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:33.159561  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:33.188876  396441 cri.go:89] found id: ""
	I1213 10:52:33.188890  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.188898  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:33.188904  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:33.188959  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:33.213013  396441 cri.go:89] found id: ""
	I1213 10:52:33.213026  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.213033  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:33.213038  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:33.213098  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:33.237950  396441 cri.go:89] found id: ""
	I1213 10:52:33.237964  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.237971  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:33.237976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:33.238030  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:33.262873  396441 cri.go:89] found id: ""
	I1213 10:52:33.262887  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.262894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:33.262899  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:33.262955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:33.289230  396441 cri.go:89] found id: ""
	I1213 10:52:33.289243  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.289250  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:33.289256  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:33.289312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:33.322162  396441 cri.go:89] found id: ""
	I1213 10:52:33.322175  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.322182  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:33.322196  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:33.322206  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:33.350122  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:33.350138  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:33.415463  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:33.415483  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:33.430091  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:33.430108  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:33.492694  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:33.492704  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:33.492713  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.059928  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:36.071377  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:36.071452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:36.097664  396441 cri.go:89] found id: ""
	I1213 10:52:36.097678  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.097685  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:36.097691  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:36.097753  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:36.123266  396441 cri.go:89] found id: ""
	I1213 10:52:36.123280  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.123287  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:36.123292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:36.123348  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:36.149443  396441 cri.go:89] found id: ""
	I1213 10:52:36.149456  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.149464  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:36.149469  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:36.149525  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:36.174882  396441 cri.go:89] found id: ""
	I1213 10:52:36.174896  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.174903  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:36.174909  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:36.174965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:36.204325  396441 cri.go:89] found id: ""
	I1213 10:52:36.204348  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.204356  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:36.204362  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:36.204427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:36.234444  396441 cri.go:89] found id: ""
	I1213 10:52:36.234457  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.234474  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:36.234479  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:36.234550  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:36.259366  396441 cri.go:89] found id: ""
	I1213 10:52:36.259390  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.259397  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:36.259406  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:36.259416  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:36.332816  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:36.332834  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:36.348343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:36.348362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:36.412337  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:36.412348  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:36.412358  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.480447  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:36.480469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:39.011418  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:39.022791  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:39.022856  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:39.048926  396441 cri.go:89] found id: ""
	I1213 10:52:39.048939  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.048946  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:39.048951  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:39.049008  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:39.074187  396441 cri.go:89] found id: ""
	I1213 10:52:39.074201  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.074209  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:39.074214  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:39.074274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:39.099262  396441 cri.go:89] found id: ""
	I1213 10:52:39.099275  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.099282  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:39.099288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:39.099351  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:39.123854  396441 cri.go:89] found id: ""
	I1213 10:52:39.123868  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.123876  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:39.123881  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:39.123935  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:39.148849  396441 cri.go:89] found id: ""
	I1213 10:52:39.148864  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.148871  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:39.148876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:39.148937  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:39.178852  396441 cri.go:89] found id: ""
	I1213 10:52:39.178866  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.178873  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:39.178879  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:39.178936  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:39.203878  396441 cri.go:89] found id: ""
	I1213 10:52:39.203892  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.203899  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:39.203907  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:39.203921  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:39.270764  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:39.270783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:39.286957  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:39.286976  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:39.359682  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:39.359693  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:39.359707  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:39.429853  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:39.429874  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:41.960684  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:41.971667  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:41.971727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:42.002821  396441 cri.go:89] found id: ""
	I1213 10:52:42.002836  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.002844  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:42.002849  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:42.002914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:42.045054  396441 cri.go:89] found id: ""
	I1213 10:52:42.045068  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.045075  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:42.045080  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:42.045141  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:42.077836  396441 cri.go:89] found id: ""
	I1213 10:52:42.077852  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.077865  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:42.077871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:42.077947  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:42.115684  396441 cri.go:89] found id: ""
	I1213 10:52:42.115706  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.115714  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:42.115729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:42.115828  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:42.147177  396441 cri.go:89] found id: ""
	I1213 10:52:42.147194  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.147202  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:42.147208  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:42.147280  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:42.180144  396441 cri.go:89] found id: ""
	I1213 10:52:42.180165  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.180174  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:42.180181  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:42.180255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:42.220442  396441 cri.go:89] found id: ""
	I1213 10:52:42.220457  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.220466  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:42.220475  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:42.220486  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:42.297964  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:42.297984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:42.315552  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:42.315571  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:42.388538  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:42.388548  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:42.388558  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:42.457255  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:42.457276  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:44.987527  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:44.999384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:44.999443  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:45.050333  396441 cri.go:89] found id: ""
	I1213 10:52:45.050351  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.050366  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:45.050372  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:45.050449  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:45.102093  396441 cri.go:89] found id: ""
	I1213 10:52:45.102110  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.102126  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:45.102132  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:45.102218  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:45.141159  396441 cri.go:89] found id: ""
	I1213 10:52:45.141176  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.141184  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:45.141190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:45.141265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:45.181959  396441 cri.go:89] found id: ""
	I1213 10:52:45.181976  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.181994  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:45.182000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:45.182074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:45.231005  396441 cri.go:89] found id: ""
	I1213 10:52:45.231020  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.231027  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:45.231033  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:45.231103  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:45.269802  396441 cri.go:89] found id: ""
	I1213 10:52:45.269816  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.269824  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:45.269829  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:45.269906  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:45.302267  396441 cri.go:89] found id: ""
	I1213 10:52:45.302281  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.302289  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:45.302297  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:45.302307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:45.375709  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:45.375731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:45.390641  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:45.390662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:45.456742  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:45.456753  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:45.456763  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:45.525649  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:45.525668  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.060311  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:48.071648  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:48.071715  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:48.102851  396441 cri.go:89] found id: ""
	I1213 10:52:48.102865  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.102872  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:48.102878  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:48.102948  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:48.128470  396441 cri.go:89] found id: ""
	I1213 10:52:48.128485  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.128492  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:48.128499  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:48.128556  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:48.155177  396441 cri.go:89] found id: ""
	I1213 10:52:48.155197  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.155205  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:48.155210  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:48.155265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:48.182358  396441 cri.go:89] found id: ""
	I1213 10:52:48.182373  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.182380  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:48.182385  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:48.182447  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:48.208531  396441 cri.go:89] found id: ""
	I1213 10:52:48.208550  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.208557  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:48.208562  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:48.208616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:48.234008  396441 cri.go:89] found id: ""
	I1213 10:52:48.234023  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.234031  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:48.234036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:48.234093  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:48.261447  396441 cri.go:89] found id: ""
	I1213 10:52:48.261461  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.261469  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:48.261480  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:48.261492  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:48.278413  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:48.278429  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:48.358811  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:48.358821  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:48.358832  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:48.433414  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:48.433443  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.466431  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:48.466452  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.033966  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:51.044258  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:51.044317  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:51.072809  396441 cri.go:89] found id: ""
	I1213 10:52:51.072823  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.072830  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:51.072836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:51.072895  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:51.102333  396441 cri.go:89] found id: ""
	I1213 10:52:51.102346  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.102353  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:51.102358  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:51.102415  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:51.128414  396441 cri.go:89] found id: ""
	I1213 10:52:51.128427  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.128434  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:51.128439  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:51.128494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:51.154902  396441 cri.go:89] found id: ""
	I1213 10:52:51.154916  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.154923  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:51.154928  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:51.154983  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:51.182112  396441 cri.go:89] found id: ""
	I1213 10:52:51.182126  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.182133  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:51.182143  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:51.182197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:51.207919  396441 cri.go:89] found id: ""
	I1213 10:52:51.207933  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.207941  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:51.207946  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:51.208001  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:51.234193  396441 cri.go:89] found id: ""
	I1213 10:52:51.234207  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.234214  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:51.234222  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:51.234238  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.303042  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:51.303060  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:51.321366  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:51.321383  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:51.393364  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:51.393375  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:51.393385  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:51.461747  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:51.461768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
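	The loop above is minikube waiting for the control plane to appear: it checks for a running kube-apiserver process, then asks the CRI runtime for containers matching each control-plane component, and because crictl returns no IDs it falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. As an illustration only (this is not minikube's ssh_runner code), a minimal Go sketch of the same crictl check might look like the following; it assumes crictl is installed and runnable with sudo on the node:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log shows minikube running on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Corresponds to the log's: No container was found matching "kube-apiserver"
			fmt.Println("no kube-apiserver container found")
			return
		}
		fmt.Println("found container IDs:", ids)
	}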
	I1213 10:52:53.992488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:54.002605  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:54.002667  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:54.037835  396441 cri.go:89] found id: ""
	I1213 10:52:54.037849  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.037857  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:54.037862  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:54.037934  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:54.066982  396441 cri.go:89] found id: ""
	I1213 10:52:54.066998  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.067009  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:54.067015  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:54.067074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:54.093461  396441 cri.go:89] found id: ""
	I1213 10:52:54.093475  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.093482  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:54.093487  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:54.093544  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:54.123249  396441 cri.go:89] found id: ""
	I1213 10:52:54.123263  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.123271  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:54.123276  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:54.123333  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:54.150103  396441 cri.go:89] found id: ""
	I1213 10:52:54.150116  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.150124  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:54.150130  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:54.150186  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:54.176271  396441 cri.go:89] found id: ""
	I1213 10:52:54.176285  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.176291  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:54.176296  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:54.176355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:54.204655  396441 cri.go:89] found id: ""
	I1213 10:52:54.204669  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.204676  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:54.204684  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:54.204695  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:54.270252  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:54.270262  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:54.270272  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:54.345996  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:54.346016  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:54.383713  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:54.383730  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:54.450349  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:54.450368  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
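	Every "describe nodes" attempt above fails the same way: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port yet. A minimal probe (an illustrative sketch, not part of the test suite) that reproduces the failing connection from the log could be:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Port taken from the log's failing URL https://localhost:8441.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// Same condition kubectl reports: dial tcp [::1]:8441: connect: connection refused.
			fmt.Println("apiserver port not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}

	Until something starts listening on 8441, each retry in the log keeps producing the identical stderr block.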
	I1213 10:52:56.966888  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:56.976557  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:56.976616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:57.007803  396441 cri.go:89] found id: ""
	I1213 10:52:57.007828  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.007836  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:57.007842  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:57.007910  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:57.035051  396441 cri.go:89] found id: ""
	I1213 10:52:57.035065  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.035073  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:57.035078  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:57.035137  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:57.060632  396441 cri.go:89] found id: ""
	I1213 10:52:57.060645  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.060652  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:57.060657  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:57.060716  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:57.090660  396441 cri.go:89] found id: ""
	I1213 10:52:57.090674  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.090681  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:57.090686  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:57.090741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:57.115624  396441 cri.go:89] found id: ""
	I1213 10:52:57.115638  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.115645  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:57.115650  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:57.115718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:57.146066  396441 cri.go:89] found id: ""
	I1213 10:52:57.146080  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.146087  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:57.146093  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:57.146147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:57.174574  396441 cri.go:89] found id: ""
	I1213 10:52:57.174589  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.174596  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:57.174604  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:57.174614  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:57.202471  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:57.202487  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:57.267828  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:57.267852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:57.284906  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:57.284922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:57.357618  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:57.357629  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:57.357641  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:59.928373  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:59.939417  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:59.939503  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:59.968871  396441 cri.go:89] found id: ""
	I1213 10:52:59.968885  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.968892  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:59.968897  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:59.968952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:59.994167  396441 cri.go:89] found id: ""
	I1213 10:52:59.994181  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.994188  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:59.994192  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:59.994244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:00.051356  396441 cri.go:89] found id: ""
	I1213 10:53:00.051372  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.051380  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:00.051386  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:00.051453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:00.143874  396441 cri.go:89] found id: ""
	I1213 10:53:00.143902  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.143910  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:00.143915  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:00.143990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:00.245636  396441 cri.go:89] found id: ""
	I1213 10:53:00.245660  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.245669  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:00.245676  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:00.245762  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:00.304351  396441 cri.go:89] found id: ""
	I1213 10:53:00.304370  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.304378  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:00.304384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:00.304463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:00.342460  396441 cri.go:89] found id: ""
	I1213 10:53:00.342483  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.342492  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:00.342503  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:00.342552  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:00.422913  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:00.422924  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:00.422935  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:00.494010  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:00.494031  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:00.523384  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:00.523401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:00.590600  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:00.590620  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.105926  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:03.116415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:03.116476  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:03.148167  396441 cri.go:89] found id: ""
	I1213 10:53:03.148181  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.148189  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:03.148195  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:03.148255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:03.173610  396441 cri.go:89] found id: ""
	I1213 10:53:03.173624  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.173633  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:03.173638  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:03.173698  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:03.198406  396441 cri.go:89] found id: ""
	I1213 10:53:03.198420  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.198427  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:03.198432  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:03.198494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:03.228196  396441 cri.go:89] found id: ""
	I1213 10:53:03.228210  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.228218  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:03.228223  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:03.228284  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:03.258506  396441 cri.go:89] found id: ""
	I1213 10:53:03.258539  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.258547  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:03.258552  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:03.258617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:03.293938  396441 cri.go:89] found id: ""
	I1213 10:53:03.293951  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.293968  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:03.293973  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:03.294029  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:03.322417  396441 cri.go:89] found id: ""
	I1213 10:53:03.322441  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.322448  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:03.322456  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:03.322467  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.338484  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:03.338500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:03.404903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:03.404913  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:03.404930  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:03.476102  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:03.476122  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:03.508468  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:03.508484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.073576  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:06.084007  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:06.084073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:06.110819  396441 cri.go:89] found id: ""
	I1213 10:53:06.110834  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.110841  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:06.110847  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:06.110915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:06.136257  396441 cri.go:89] found id: ""
	I1213 10:53:06.136271  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.136278  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:06.136286  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:06.136344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:06.162392  396441 cri.go:89] found id: ""
	I1213 10:53:06.162406  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.162413  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:06.162419  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:06.162479  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:06.191163  396441 cri.go:89] found id: ""
	I1213 10:53:06.191178  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.191185  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:06.191190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:06.191244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:06.217747  396441 cri.go:89] found id: ""
	I1213 10:53:06.217761  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.217769  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:06.217774  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:06.217829  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:06.242838  396441 cri.go:89] found id: ""
	I1213 10:53:06.242851  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.242858  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:06.242864  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:06.242918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:06.267811  396441 cri.go:89] found id: ""
	I1213 10:53:06.267831  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.267838  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:06.267846  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:06.267857  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:06.351297  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:06.351310  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:06.351321  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:06.418677  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:06.418696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:06.456760  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:06.456778  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.525341  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:06.525362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:09.044095  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:09.054348  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:09.054410  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:09.081344  396441 cri.go:89] found id: ""
	I1213 10:53:09.081358  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.081365  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:09.081376  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:09.081434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:09.107998  396441 cri.go:89] found id: ""
	I1213 10:53:09.108012  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.108019  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:09.108024  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:09.108084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:09.133582  396441 cri.go:89] found id: ""
	I1213 10:53:09.133596  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.133603  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:09.133608  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:09.133666  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:09.158646  396441 cri.go:89] found id: ""
	I1213 10:53:09.158669  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.158677  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:09.158682  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:09.158746  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:09.184013  396441 cri.go:89] found id: ""
	I1213 10:53:09.184028  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.184035  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:09.184040  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:09.184097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:09.210338  396441 cri.go:89] found id: ""
	I1213 10:53:09.210352  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.210370  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:09.210376  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:09.210434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:09.236029  396441 cri.go:89] found id: ""
	I1213 10:53:09.236045  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.236052  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:09.236059  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:09.236069  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:09.310970  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:09.310981  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:09.310992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:09.380678  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:09.380700  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:09.413354  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:09.413371  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:09.481585  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:09.481603  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:11.996259  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:12.009133  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:12.009217  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:12.044141  396441 cri.go:89] found id: ""
	I1213 10:53:12.044157  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.044164  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:12.044170  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:12.044230  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:12.070547  396441 cri.go:89] found id: ""
	I1213 10:53:12.070579  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.070587  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:12.070598  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:12.070664  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:12.095879  396441 cri.go:89] found id: ""
	I1213 10:53:12.095893  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.095900  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:12.095905  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:12.095965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:12.125533  396441 cri.go:89] found id: ""
	I1213 10:53:12.125547  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.125554  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:12.125559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:12.125618  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:12.151281  396441 cri.go:89] found id: ""
	I1213 10:53:12.151303  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.151311  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:12.151317  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:12.151385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:12.176331  396441 cri.go:89] found id: ""
	I1213 10:53:12.176353  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.176361  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:12.176366  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:12.176433  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:12.202465  396441 cri.go:89] found id: ""
	I1213 10:53:12.202486  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.202493  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:12.202500  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:12.202523  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:12.268244  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:12.268263  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:12.285364  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:12.285379  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:12.357173  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:12.357192  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:12.357204  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:12.424809  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:12.424830  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:14.955688  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:14.967057  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:14.967115  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:14.993136  396441 cri.go:89] found id: ""
	I1213 10:53:14.993150  396441 logs.go:282] 0 containers: []
	W1213 10:53:14.993157  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:14.993163  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:14.993220  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:15.028691  396441 cri.go:89] found id: ""
	I1213 10:53:15.028707  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.028722  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:15.028728  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:15.028794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:15.056676  396441 cri.go:89] found id: ""
	I1213 10:53:15.056705  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.056732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:15.056739  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:15.056800  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:15.085199  396441 cri.go:89] found id: ""
	I1213 10:53:15.085213  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.085221  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:15.085226  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:15.085288  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:15.113074  396441 cri.go:89] found id: ""
	I1213 10:53:15.113088  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.113095  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:15.113101  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:15.113159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:15.142568  396441 cri.go:89] found id: ""
	I1213 10:53:15.142581  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.142589  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:15.142595  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:15.142655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:15.167430  396441 cri.go:89] found id: ""
	I1213 10:53:15.167443  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.167450  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:15.167458  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:15.167471  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:15.233925  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:15.233946  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:15.248849  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:15.248866  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:15.332377  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:15.332397  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:15.332409  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:15.401263  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:15.401283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:17.930625  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:17.940643  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:17.940703  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:17.965657  396441 cri.go:89] found id: ""
	I1213 10:53:17.965671  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.965678  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:17.965683  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:17.965740  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:17.990612  396441 cri.go:89] found id: ""
	I1213 10:53:17.990635  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.990642  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:17.990648  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:17.990723  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:18.025034  396441 cri.go:89] found id: ""
	I1213 10:53:18.025049  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.025057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:18.025063  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:18.025123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:18.052589  396441 cri.go:89] found id: ""
	I1213 10:53:18.052611  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.052619  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:18.052625  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:18.052683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:18.079906  396441 cri.go:89] found id: ""
	I1213 10:53:18.079921  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.079929  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:18.079935  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:18.079997  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:18.107302  396441 cri.go:89] found id: ""
	I1213 10:53:18.107327  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.107335  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:18.107340  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:18.107409  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:18.135776  396441 cri.go:89] found id: ""
	I1213 10:53:18.135790  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.135797  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:18.135805  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:18.135815  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:18.153173  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:18.153189  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:18.221544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:18.221554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:18.221565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:18.296047  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:18.296072  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:18.330043  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:18.330063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:20.909395  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:20.919737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:20.919799  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:20.946000  396441 cri.go:89] found id: ""
	I1213 10:53:20.946014  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.946022  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:20.946027  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:20.946084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:20.975734  396441 cri.go:89] found id: ""
	I1213 10:53:20.975749  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.975756  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:20.975761  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:20.975815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:21.000961  396441 cri.go:89] found id: ""
	I1213 10:53:21.000976  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.000983  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:21.000988  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:21.001043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:21.027875  396441 cri.go:89] found id: ""
	I1213 10:53:21.027889  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.027896  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:21.027902  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:21.027963  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:21.053113  396441 cri.go:89] found id: ""
	I1213 10:53:21.053127  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.053134  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:21.053140  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:21.053198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:21.078404  396441 cri.go:89] found id: ""
	I1213 10:53:21.078418  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.078425  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:21.078430  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:21.078484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:21.103558  396441 cri.go:89] found id: ""
	I1213 10:53:21.103571  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.103579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:21.103592  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:21.103604  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:21.172527  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:21.172545  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:21.187768  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:21.187785  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:21.256696  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:21.256707  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:21.256717  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:21.327132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:21.327151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:23.867087  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:23.877218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:23.877278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:23.901809  396441 cri.go:89] found id: ""
	I1213 10:53:23.901824  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.901831  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:23.901836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:23.901892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:23.928024  396441 cri.go:89] found id: ""
	I1213 10:53:23.928038  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.928044  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:23.928051  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:23.928104  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:23.953141  396441 cri.go:89] found id: ""
	I1213 10:53:23.953154  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.953161  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:23.953166  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:23.953223  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:23.981670  396441 cri.go:89] found id: ""
	I1213 10:53:23.981684  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.981691  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:23.981696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:23.981754  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:24.014889  396441 cri.go:89] found id: ""
	I1213 10:53:24.014904  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.014912  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:24.014917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:24.014982  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:24.041025  396441 cri.go:89] found id: ""
	I1213 10:53:24.041040  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.041047  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:24.041052  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:24.041110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:24.068555  396441 cri.go:89] found id: ""
	I1213 10:53:24.068570  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.068578  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:24.068586  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:24.068596  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:24.082803  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:24.082819  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:24.145822  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:24.145832  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:24.145843  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:24.213727  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:24.213747  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:24.241111  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:24.241126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:26.808221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:26.818590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:26.818659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:26.848553  396441 cri.go:89] found id: ""
	I1213 10:53:26.848568  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.848575  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:26.848580  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:26.848636  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:26.878256  396441 cri.go:89] found id: ""
	I1213 10:53:26.878274  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.878281  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:26.878288  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:26.878343  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:26.905040  396441 cri.go:89] found id: ""
	I1213 10:53:26.905054  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.905061  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:26.905067  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:26.905140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:26.933587  396441 cri.go:89] found id: ""
	I1213 10:53:26.933601  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.933608  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:26.933613  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:26.933669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:26.958154  396441 cri.go:89] found id: ""
	I1213 10:53:26.958167  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.958175  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:26.958180  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:26.958240  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:26.986142  396441 cri.go:89] found id: ""
	I1213 10:53:26.986156  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.986164  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:26.986169  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:26.986222  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:27.013602  396441 cri.go:89] found id: ""
	I1213 10:53:27.013617  396441 logs.go:282] 0 containers: []
	W1213 10:53:27.013625  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:27.013633  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:27.013643  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:27.080830  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:27.080850  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:27.109824  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:27.109839  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:27.175975  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:27.176002  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:27.190437  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:27.190456  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:27.254921  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:29.755755  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:29.767564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:29.767645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:29.797908  396441 cri.go:89] found id: ""
	I1213 10:53:29.797922  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.797929  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:29.797935  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:29.797994  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:29.824494  396441 cri.go:89] found id: ""
	I1213 10:53:29.824508  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.824516  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:29.824521  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:29.824577  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:29.853869  396441 cri.go:89] found id: ""
	I1213 10:53:29.853883  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.853890  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:29.853895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:29.853951  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:29.883491  396441 cri.go:89] found id: ""
	I1213 10:53:29.883504  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.883526  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:29.883531  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:29.883590  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:29.908921  396441 cri.go:89] found id: ""
	I1213 10:53:29.908935  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.908943  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:29.908948  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:29.909004  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:29.938464  396441 cri.go:89] found id: ""
	I1213 10:53:29.938478  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.938485  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:29.938490  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:29.938568  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:29.964642  396441 cri.go:89] found id: ""
	I1213 10:53:29.964658  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.964665  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:29.964672  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:29.964682  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:30.032663  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:30.032688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:30.050167  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:30.050188  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:30.119376  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:30.119387  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:30.119398  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:30.188285  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:30.188307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:32.723464  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:32.734250  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:32.734319  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:32.760154  396441 cri.go:89] found id: ""
	I1213 10:53:32.760168  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.760175  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:32.760180  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:32.760237  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:32.788893  396441 cri.go:89] found id: ""
	I1213 10:53:32.788906  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.788913  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:32.788918  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:32.788973  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:32.815801  396441 cri.go:89] found id: ""
	I1213 10:53:32.815815  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.815822  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:32.815827  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:32.815884  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:32.840740  396441 cri.go:89] found id: ""
	I1213 10:53:32.840754  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.840761  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:32.840766  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:32.840820  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:32.865881  396441 cri.go:89] found id: ""
	I1213 10:53:32.865895  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.865902  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:32.865907  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:32.865962  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:32.891687  396441 cri.go:89] found id: ""
	I1213 10:53:32.891702  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.891709  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:32.891714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:32.891768  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:32.918219  396441 cri.go:89] found id: ""
	I1213 10:53:32.918233  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.918240  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:32.918248  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:32.918271  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:32.982730  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:32.982749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:32.982759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:33.055443  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:33.055464  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:33.092574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:33.092592  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:33.159246  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:33.159268  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.674110  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:35.683841  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:35.683897  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:35.708708  396441 cri.go:89] found id: ""
	I1213 10:53:35.708722  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.708729  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:35.708735  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:35.708792  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:35.733638  396441 cri.go:89] found id: ""
	I1213 10:53:35.733652  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.733659  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:35.733665  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:35.733725  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:35.759232  396441 cri.go:89] found id: ""
	I1213 10:53:35.759246  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.759254  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:35.759259  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:35.759318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:35.787542  396441 cri.go:89] found id: ""
	I1213 10:53:35.787557  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.787564  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:35.787569  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:35.787625  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:35.811703  396441 cri.go:89] found id: ""
	I1213 10:53:35.811716  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.811724  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:35.811729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:35.811786  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:35.837035  396441 cri.go:89] found id: ""
	I1213 10:53:35.837049  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.837057  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:35.837062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:35.837121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:35.863392  396441 cri.go:89] found id: ""
	I1213 10:53:35.863406  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.863414  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:35.863421  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:35.863431  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:35.928750  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:35.928771  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.943680  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:35.943696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:36.014992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:36.015006  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:36.015018  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:36.088705  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:36.088726  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:38.618865  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:38.628567  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:38.628627  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:38.657828  396441 cri.go:89] found id: ""
	I1213 10:53:38.657842  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.657853  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:38.657859  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:38.657916  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:38.686067  396441 cri.go:89] found id: ""
	I1213 10:53:38.686081  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.686088  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:38.686093  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:38.686148  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:38.723682  396441 cri.go:89] found id: ""
	I1213 10:53:38.723696  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.723703  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:38.723709  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:38.723764  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:38.749537  396441 cri.go:89] found id: ""
	I1213 10:53:38.749552  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.749559  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:38.749564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:38.749617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:38.774109  396441 cri.go:89] found id: ""
	I1213 10:53:38.774129  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.774136  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:38.774141  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:38.774198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:38.799225  396441 cri.go:89] found id: ""
	I1213 10:53:38.799239  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.799263  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:38.799269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:38.799323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:38.828154  396441 cri.go:89] found id: ""
	I1213 10:53:38.828168  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.828176  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:38.828183  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:38.828192  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:38.892547  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:38.892565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:38.907245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:38.907267  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:38.971825  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:38.971835  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:38.971847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:39.041005  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:39.041026  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.575691  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:41.585703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:41.585767  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:41.611468  396441 cri.go:89] found id: ""
	I1213 10:53:41.611482  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.611490  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:41.611495  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:41.611582  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:41.637775  396441 cri.go:89] found id: ""
	I1213 10:53:41.637790  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.637797  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:41.637802  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:41.637865  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:41.666669  396441 cri.go:89] found id: ""
	I1213 10:53:41.666683  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.666691  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:41.666696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:41.666750  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:41.691305  396441 cri.go:89] found id: ""
	I1213 10:53:41.691328  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.691336  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:41.691341  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:41.691403  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:41.716485  396441 cri.go:89] found id: ""
	I1213 10:53:41.716506  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.716514  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:41.716519  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:41.716576  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:41.745432  396441 cri.go:89] found id: ""
	I1213 10:53:41.745446  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.745453  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:41.745458  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:41.745515  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:41.770118  396441 cri.go:89] found id: ""
	I1213 10:53:41.770131  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.770138  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:41.770156  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:41.770165  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.799454  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:41.799470  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:41.863838  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:41.863858  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:41.878805  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:41.878821  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:41.944990  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:41.945000  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:41.945011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.513654  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:44.523863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:44.523923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:44.556878  396441 cri.go:89] found id: ""
	I1213 10:53:44.556891  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.556912  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:44.556917  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:44.556984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:44.592098  396441 cri.go:89] found id: ""
	I1213 10:53:44.592111  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.592128  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:44.592133  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:44.592200  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:44.620862  396441 cri.go:89] found id: ""
	I1213 10:53:44.620875  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.620883  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:44.620898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:44.620965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:44.652601  396441 cri.go:89] found id: ""
	I1213 10:53:44.652615  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.652622  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:44.652627  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:44.652683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:44.678239  396441 cri.go:89] found id: ""
	I1213 10:53:44.678253  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.678269  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:44.678275  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:44.678340  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:44.703917  396441 cri.go:89] found id: ""
	I1213 10:53:44.703930  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.703938  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:44.703943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:44.704002  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:44.730484  396441 cri.go:89] found id: ""
	I1213 10:53:44.730497  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.730505  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:44.730523  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:44.730538  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:44.744828  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:44.744844  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:44.809441  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:44.809451  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:44.809463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.877771  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:44.877793  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:44.911088  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:44.911103  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:47.481207  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:47.491256  396441 kubeadm.go:602] duration metric: took 4m3.474830683s to restartPrimaryControlPlane
	W1213 10:53:47.491316  396441 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:53:47.491392  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:53:47.914152  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:53:47.926543  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:53:47.934327  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:53:47.934378  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:53:47.941688  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:53:47.941697  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:53:47.941743  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:53:47.949173  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:53:47.949232  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:53:47.956350  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:53:47.963878  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:53:47.963941  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:53:47.971122  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.978729  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:53:47.978780  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.985856  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:53:47.993466  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:53:47.993519  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:53:48.001100  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:53:48.045742  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:53:48.045801  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:53:48.119066  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:53:48.119144  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:53:48.119191  396441 kubeadm.go:319] OS: Linux
	I1213 10:53:48.119235  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:53:48.119293  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:53:48.119348  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:53:48.119396  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:53:48.119453  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:53:48.119544  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:53:48.119589  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:53:48.119648  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:53:48.119703  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:53:48.191760  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:53:48.191864  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:53:48.191953  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:53:48.199827  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:53:48.203364  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:53:48.203457  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:53:48.203575  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:53:48.203646  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:53:48.203710  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:53:48.203925  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:53:48.203983  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:53:48.204042  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:53:48.204098  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:53:48.204167  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:53:48.204241  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:53:48.204278  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:53:48.204329  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:53:48.358581  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:53:48.732777  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:53:49.132208  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:53:49.321084  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:53:49.412268  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:53:49.412908  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:53:49.417021  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:53:49.420254  396441 out.go:252]   - Booting up control plane ...
	I1213 10:53:49.420359  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:53:49.420477  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:53:49.421364  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:53:49.437192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:53:49.437314  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:53:49.445560  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:53:49.445850  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:53:49.446065  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:53:49.579988  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:53:49.580095  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:57:49.575955  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000564023s
	I1213 10:57:49.575972  396441 kubeadm.go:319] 
	I1213 10:57:49.576025  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:57:49.576055  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:57:49.576153  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:57:49.576156  396441 kubeadm.go:319] 
	I1213 10:57:49.576253  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:57:49.576282  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:57:49.576311  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:57:49.576314  396441 kubeadm.go:319] 
	I1213 10:57:49.584496  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:57:49.584979  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:57:49.585109  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:57:49.585360  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:57:49.585367  396441 kubeadm.go:319] 
	I1213 10:57:49.585449  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 10:57:49.585544  396441 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000564023s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
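	The init attempt above stalls in the kubelet-check phase: kubeadm polls http://127.0.0.1:10248/healthz for 4m0s and never gets a healthy response. The same probe, plus the troubleshooting commands the output itself suggests, can be run on the node (a sketch assuming shell access to the minikube node, for example via `minikube ssh`):
		# Probe the kubelet healthz endpoint that kubeadm was waiting on.
		curl -sSL http://127.0.0.1:10248/healthz ; echo
		# Inspect the kubelet service state and its recent journal, as suggested by the error text.
		systemctl status kubelet --no-pager
		journalctl -xeu kubelet --no-pager | tail -n 100
	The journal output is where the actual kubelet failure reason would appear; the healthz timeout only tells us the kubelet never became ready.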
	
	I1213 10:57:49.585636  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:57:50.015805  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:50.030733  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:57:50.030794  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:57:50.040503  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:57:50.040514  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:57:50.040573  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:57:50.049098  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:57:50.049158  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:57:50.057150  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:57:50.066557  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:57:50.066659  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:57:50.074920  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.083448  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:57:50.083507  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.092213  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:57:50.100606  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:57:50.100667  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:57:50.108705  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:57:50.150598  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:57:50.150922  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:57:50.222346  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:57:50.222407  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:57:50.222441  396441 kubeadm.go:319] OS: Linux
	I1213 10:57:50.222482  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:57:50.222526  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:57:50.222570  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:57:50.222621  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:57:50.222666  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:57:50.222718  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:57:50.222760  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:57:50.222804  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:57:50.222847  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:57:50.290176  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:57:50.290279  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:57:50.290370  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:57:50.297738  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:57:50.303127  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:57:50.303239  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:57:50.303307  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:57:50.303384  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:57:50.303444  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:57:50.303589  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:57:50.303642  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:57:50.303705  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:57:50.303769  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:57:50.303843  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:57:50.303915  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:57:50.303952  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:57:50.304007  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:57:50.552022  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:57:50.900706  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:57:50.944600  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:57:51.426451  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:57:51.746824  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:57:51.747542  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:57:51.750376  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:57:51.753437  396441 out.go:252]   - Booting up control plane ...
	I1213 10:57:51.753548  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:57:51.753629  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:57:51.754233  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:57:51.768926  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:57:51.769192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:57:51.780537  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:57:51.780629  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:57:51.780668  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:57:51.907080  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:57:51.907187  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:01:51.907939  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001143765s
	I1213 11:01:51.907957  396441 kubeadm.go:319] 
	I1213 11:01:51.908010  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:01:51.908040  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:01:51.908138  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:01:51.908141  396441 kubeadm.go:319] 
	I1213 11:01:51.908238  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:01:51.908267  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:01:51.908295  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:01:51.908298  396441 kubeadm.go:319] 
	I1213 11:01:51.911942  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:01:51.912375  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:01:51.912489  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:01:51.912750  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:01:51.912759  396441 kubeadm.go:319] 
	I1213 11:01:51.912853  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:01:51.912889  396441 kubeadm.go:403] duration metric: took 12m7.937442674s to StartCluster
	I1213 11:01:51.912920  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:01:51.912979  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:01:51.938530  396441 cri.go:89] found id: ""
	I1213 11:01:51.938545  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.938552  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:01:51.938558  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:01:51.938614  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:01:51.963977  396441 cri.go:89] found id: ""
	I1213 11:01:51.963991  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.963998  396441 logs.go:284] No container was found matching "etcd"
	I1213 11:01:51.964003  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:01:51.964062  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:01:51.988936  396441 cri.go:89] found id: ""
	I1213 11:01:51.988951  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.988958  396441 logs.go:284] No container was found matching "coredns"
	I1213 11:01:51.988963  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:01:51.989016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:01:52.019417  396441 cri.go:89] found id: ""
	I1213 11:01:52.019431  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.019439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:01:52.019444  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:01:52.019504  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:01:52.046337  396441 cri.go:89] found id: ""
	I1213 11:01:52.046352  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.046360  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:01:52.046365  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:01:52.046426  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:01:52.072247  396441 cri.go:89] found id: ""
	I1213 11:01:52.072261  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.072269  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:01:52.072274  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:01:52.072335  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:01:52.098208  396441 cri.go:89] found id: ""
	I1213 11:01:52.098222  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.098230  396441 logs.go:284] No container was found matching "kindnet"
	I1213 11:01:52.098238  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 11:01:52.098248  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:01:52.165245  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 11:01:52.165265  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:01:52.179908  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:01:52.179924  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:01:52.245950  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:01:52.245965  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:01:52.245974  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:01:52.322777  396441 logs.go:123] Gathering logs for container status ...
	I1213 11:01:52.322795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 11:01:52.353497  396441 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:01:52.353528  396441 out.go:285] * 
	W1213 11:01:52.353591  396441 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.353607  396441 out.go:285] * 
	W1213 11:01:52.355785  396441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:01:52.362615  396441 out.go:203] 
	W1213 11:01:52.366304  396441 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.366353  396441 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:01:52.366376  396441 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:01:52.369563  396441 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.43259327Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432628568Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432669931Z" level=info msg="Create NRI interface"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432773423Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432782531Z" level=info msg="runtime interface created"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432793805Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432800656Z" level=info msg="runtime interface starting up..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432807844Z" level=info msg="starting plugins..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432820907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432883414Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:49:42 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.19567159Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c8401471-cf55-4e91-8c5f-25a7803eeff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.1966268Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=72a9b02f-646a-4554-ae9a-9e3da3b7ad0c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197123888Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9caf3dbd-ac4b-4ee0-a136-15962b2eeea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197584529Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=86fa4638-cc37-45ef-b1b9-31efae43690d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198007073Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=37f9bdfd-077a-4751-a897-e7c971db1d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198454331Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f02d4db1-79bc-4d79-9072-497dd5c75d43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198871681Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a0158e10-bee2-405d-9643-45512681023c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.293525942Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fa6c343-c4b6-41b8-a772-00d9ff9f481b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294225272Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f29d3de7-c9c2-4c34-9a76-76647c28c359 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294692649Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=115a2b32-9e68-43c7-90af-1d4450976368 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295176544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cce5b0a2-af51-4974-8c4f-26d3aadd70cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295829785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bba9558c-4301-4576-890b-64bddc5af9b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296320695Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59bc3a50-c36c-4024-8506-47dbb78201d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296784429Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=97458369-23f9-4acf-a127-9b41f30c00a3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:03:57.859805   23248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:57.861257   23248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:57.861891   23248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:57.863464   23248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:57.863950   23248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:03:57 up  2:46,  0 user,  load average: 0.28, 0.20, 0.40
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:03:55 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:55 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1126.
	Dec 13 11:03:55 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:55 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:56 functional-407525 kubelet[23104]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:56 functional-407525 kubelet[23104]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:56 functional-407525 kubelet[23104]: E1213 11:03:56.033403   23104 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:56 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:56 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:56 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1127.
	Dec 13 11:03:56 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:56 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:56 functional-407525 kubelet[23142]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:56 functional-407525 kubelet[23142]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:56 functional-407525 kubelet[23142]: E1213 11:03:56.852167   23142 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:56 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:56 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:57 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1128.
	Dec 13 11:03:57 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:57 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:57 functional-407525 kubelet[23170]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:57 functional-407525 kubelet[23170]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:57 functional-407525 kubelet[23170]: E1213 11:03:57.578319   23170 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:57 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:57 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (368.230303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)
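Note on the shared root cause visible in the logs above: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", so kubeadm times out waiting on http://127.0.0.1:10248/healthz, the apiserver on port 8441 never starts, and every later kubectl call is refused. The kubeadm warning names the kubelet configuration option 'FailCgroupV1' (set to 'false' to keep running on a cgroup v1 host), and minikube's own output suggests a start flag. A minimal sketch of that suggestion follows, assuming the same profile name; whether the FailCgroupV1 option can be wired through minikube's --extra-config is not confirmed by this log and is only an assumption:

	# suggestion printed by minikube itself in the failure output above
	minikube start -p functional-407525 --extra-config=kubelet.cgroup-driver=systemd
	# the kubeadm warning instead points at the KubeletConfiguration field FailCgroupV1=false,
	# which would need to reach the generated /var/lib/kubelet/config.yaml; how minikube
	# would expose that is left open here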

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-407525 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-407525 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (61.597618ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-407525 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-407525 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-407525 describe po hello-node-connect: exit status 1 (61.396155ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-407525 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-407525 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-407525 logs -l app=hello-node-connect: exit status 1 (58.087288ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-407525 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-407525 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-407525 describe svc hello-node-connect: exit status 1 (70.341364ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-407525 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (354.050133ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ cache   │ functional-407525 cache reload                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ ssh     │ functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │ 13 Dec 25 10:49 UTC │
	│ kubectl │ functional-407525 kubectl -- --context functional-407525 get pods                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ start   │ -p functional-407525 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 10:49 UTC │                     │
	│ cp      │ functional-407525 cp testdata/cp-test.txt /home/docker/cp-test.txt                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ config  │ functional-407525 config unset cpus                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ config  │ functional-407525 config get cpus                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │                     │
	│ config  │ functional-407525 config set cpus 2                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ config  │ functional-407525 config get cpus                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ config  │ functional-407525 config unset cpus                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ ssh     │ functional-407525 ssh -n functional-407525 sudo cat /home/docker/cp-test.txt                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ config  │ functional-407525 config get cpus                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │                     │
	│ ssh     │ functional-407525 ssh echo hello                                                                         │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ ssh     │ functional-407525 ssh cat /etc/hostname                                                                  │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:01 UTC │
	│ ssh     │ functional-407525 ssh -n functional-407525 sudo cat /home/docker/cp-test.txt                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │ 13 Dec 25 11:02 UTC │
	│ tunnel  │ functional-407525 tunnel --alsologtostderr                                                               │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │                     │
	│ tunnel  │ functional-407525 tunnel --alsologtostderr                                                               │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:01 UTC │                     │
	│ cp      │ functional-407525 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:02 UTC │ 13 Dec 25 11:02 UTC │
	│ tunnel  │ functional-407525 tunnel --alsologtostderr                                                               │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:02 UTC │                     │
	│ ssh     │ functional-407525 ssh -n functional-407525 sudo cat /tmp/does/not/exist/cp-test.txt                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:02 UTC │ 13 Dec 25 11:02 UTC │
	│ addons  │ functional-407525 addons list                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ addons  │ functional-407525 addons list -o json                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:49:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:49:39.014629  396441 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:49:39.014755  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.014760  396441 out.go:374] Setting ErrFile to fd 2...
	I1213 10:49:39.014764  396441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:49:39.015052  396441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:49:39.015432  396441 out.go:368] Setting JSON to false
	I1213 10:49:39.016356  396441 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9131,"bootTime":1765613848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:49:39.016423  396441 start.go:143] virtualization:  
	I1213 10:49:39.019850  396441 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:49:39.022886  396441 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:49:39.022964  396441 notify.go:221] Checking for updates...
	I1213 10:49:39.029514  396441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:49:39.032457  396441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:49:39.035302  396441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:49:39.038191  396441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:49:39.041178  396441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:49:39.044626  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:39.044735  396441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:49:39.073132  396441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:49:39.073240  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.131952  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.12226015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.132042  396441 docker.go:319] overlay module found
	I1213 10:49:39.135181  396441 out.go:179] * Using the docker driver based on existing profile
	I1213 10:49:39.138004  396441 start.go:309] selected driver: docker
	I1213 10:49:39.138012  396441 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.138117  396441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:49:39.138218  396441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:49:39.201683  396441 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-13 10:49:39.192871513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:49:39.202106  396441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 10:49:39.202131  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:39.202182  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:39.202230  396441 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:39.205440  396441 out.go:179] * Starting "functional-407525" primary control-plane node in "functional-407525" cluster
	I1213 10:49:39.208563  396441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:49:39.211465  396441 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:49:39.214245  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:39.214282  396441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:49:39.214290  396441 cache.go:65] Caching tarball of preloaded images
	I1213 10:49:39.214340  396441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:49:39.214371  396441 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 10:49:39.214379  396441 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 10:49:39.214508  396441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/config.json ...
	I1213 10:49:39.233590  396441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 10:49:39.233607  396441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 10:49:39.233619  396441 cache.go:243] Successfully downloaded all kic artifacts
	I1213 10:49:39.233649  396441 start.go:360] acquireMachinesLock for functional-407525: {Name:mkb9a6ddeb0e93e626919e03dc3c989f045e07da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 10:49:39.233703  396441 start.go:364] duration metric: took 38.187µs to acquireMachinesLock for "functional-407525"
	I1213 10:49:39.233721  396441 start.go:96] Skipping create...Using existing machine configuration
	I1213 10:49:39.233725  396441 fix.go:54] fixHost starting: 
	I1213 10:49:39.234003  396441 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
	I1213 10:49:39.250771  396441 fix.go:112] recreateIfNeeded on functional-407525: state=Running err=<nil>
	W1213 10:49:39.250790  396441 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 10:49:39.253977  396441 out.go:252] * Updating the running docker "functional-407525" container ...
	I1213 10:49:39.254007  396441 machine.go:94] provisionDockerMachine start ...
	I1213 10:49:39.254089  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.270672  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.270992  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.270998  396441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 10:49:39.419071  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.419086  396441 ubuntu.go:182] provisioning hostname "functional-407525"
	I1213 10:49:39.419147  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.437001  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.437302  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.437311  396441 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-407525 && echo "functional-407525" | sudo tee /etc/hostname
	I1213 10:49:39.596975  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-407525
	
	I1213 10:49:39.597049  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:39.614748  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:39.615049  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:39.615063  396441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-407525' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-407525/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-407525' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 10:49:39.763894  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 10:49:39.763910  396441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 10:49:39.763930  396441 ubuntu.go:190] setting up certificates
	I1213 10:49:39.763939  396441 provision.go:84] configureAuth start
	I1213 10:49:39.763997  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:39.782226  396441 provision.go:143] copyHostCerts
	I1213 10:49:39.782297  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 10:49:39.782308  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 10:49:39.782382  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 10:49:39.782470  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 10:49:39.782473  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 10:49:39.782511  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 10:49:39.782561  396441 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 10:49:39.782565  396441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 10:49:39.782587  396441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 10:49:39.782630  396441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.functional-407525 san=[127.0.0.1 192.168.49.2 functional-407525 localhost minikube]
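The provision step above issues a server certificate carrying the listed IP and DNS SANs, signed by the minikube CA. A self-contained Go sketch of creating a certificate with those SANs (self-signed here for brevity, unlike the CA-signed cert in the log):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Template carrying the same kind of SANs seen in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-407525"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-407525", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}

	// Self-signed for the sketch; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}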
	I1213 10:49:40.264423  396441 provision.go:177] copyRemoteCerts
	I1213 10:49:40.264477  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 10:49:40.264518  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.288593  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.395503  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 10:49:40.413777  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 10:49:40.432071  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 10:49:40.449556  396441 provision.go:87] duration metric: took 685.604236ms to configureAuth
	I1213 10:49:40.449573  396441 ubuntu.go:206] setting minikube options for container-runtime
	I1213 10:49:40.449767  396441 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 10:49:40.449873  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.466720  396441 main.go:143] libmachine: Using SSH client type: native
	I1213 10:49:40.467023  396441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1213 10:49:40.467036  396441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 10:49:40.812989  396441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 10:49:40.813002  396441 machine.go:97] duration metric: took 1.558987505s to provisionDockerMachine
	I1213 10:49:40.813012  396441 start.go:293] postStartSetup for "functional-407525" (driver="docker")
	I1213 10:49:40.813024  396441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 10:49:40.813085  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 10:49:40.813128  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:40.831095  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:40.935727  396441 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 10:49:40.939068  396441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 10:49:40.939087  396441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 10:49:40.939096  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 10:49:40.939151  396441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 10:49:40.939232  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 10:49:40.939303  396441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts -> hosts in /etc/test/nested/copy/356328
	I1213 10:49:40.939344  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/356328
	I1213 10:49:40.947101  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:40.964732  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts --> /etc/test/nested/copy/356328/hosts (40 bytes)
	I1213 10:49:40.981668  396441 start.go:296] duration metric: took 168.641746ms for postStartSetup
	I1213 10:49:40.981767  396441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 10:49:40.981804  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.001302  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.104610  396441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 10:49:41.109266  396441 fix.go:56] duration metric: took 1.875532342s for fixHost
	I1213 10:49:41.109282  396441 start.go:83] releasing machines lock for "functional-407525", held for 1.875571571s
	I1213 10:49:41.109349  396441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-407525
	I1213 10:49:41.125841  396441 ssh_runner.go:195] Run: cat /version.json
	I1213 10:49:41.125888  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.126157  396441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 10:49:41.126214  396441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
	I1213 10:49:41.148984  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.157093  396441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
	I1213 10:49:41.349053  396441 ssh_runner.go:195] Run: systemctl --version
	I1213 10:49:41.355137  396441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 10:49:41.394464  396441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 10:49:41.399282  396441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 10:49:41.399342  396441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 10:49:41.407074  396441 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
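The find/mv step above renames any bridge or podman CNI configs so they stop taking effect. A rough Go equivalent of that rename-to-.mk_disabled pass (path and name patterns taken from the logged command; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", mirroring the find/mv command in the log.
func disableBridgeConfs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeConfs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}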
	I1213 10:49:41.407089  396441 start.go:496] detecting cgroup driver to use...
	I1213 10:49:41.407118  396441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 10:49:41.407177  396441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 10:49:41.422248  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 10:49:41.434814  396441 docker.go:218] disabling cri-docker service (if available) ...
	I1213 10:49:41.434866  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 10:49:41.450404  396441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 10:49:41.463493  396441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 10:49:41.587216  396441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 10:49:41.708085  396441 docker.go:234] disabling docker service ...
	I1213 10:49:41.708178  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 10:49:41.726011  396441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 10:49:41.739486  396441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 10:49:41.858015  396441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 10:49:41.976835  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 10:49:41.990126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 10:49:42.004186  396441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 10:49:42.004281  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.015561  396441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 10:49:42.015636  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.026721  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.037311  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.047280  396441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 10:49:42.056517  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.067880  396441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 10:49:42.078430  396441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
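The sed calls above rewrite pause_image and cgroup_manager (and seed default_sysctls) in 02-crio.conf. A stdlib-only Go sketch of the two main line rewrites, assuming the file exists and already contains those keys:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Mirror the two sed substitutions from the log: pin the pause image
	// and force the cgroupfs cgroup manager.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}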
	I1213 10:49:42.089815  396441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 10:49:42.100093  396441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 10:49:42.110006  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.245156  396441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 10:49:42.438084  396441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 10:49:42.438159  396441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 10:49:42.442010  396441 start.go:564] Will wait 60s for crictl version
	I1213 10:49:42.442064  396441 ssh_runner.go:195] Run: which crictl
	I1213 10:49:42.445629  396441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 10:49:42.469110  396441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 10:49:42.469189  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.498052  396441 ssh_runner.go:195] Run: crio --version
	I1213 10:49:42.536633  396441 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 10:49:42.539603  396441 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 10:49:42.571469  396441 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 10:49:42.578474  396441 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1213 10:49:42.582400  396441 kubeadm.go:884] updating cluster {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 10:49:42.582534  396441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 10:49:42.582601  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.622515  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.622526  396441 crio.go:433] Images already preloaded, skipping extraction
	I1213 10:49:42.622581  396441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 10:49:42.647505  396441 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 10:49:42.647532  396441 cache_images.go:86] Images are preloaded, skipping loading
	I1213 10:49:42.647540  396441 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1213 10:49:42.647645  396441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-407525 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
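The kubelet systemd drop-in shown above is rendered from the node's name, IP and Kubernetes version. A small text/template sketch producing a trimmed ExecStart of the same shape (the template fields and the trimming are assumptions for illustration, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.35.0-beta.0",
		"NodeName":          "functional-407525",
		"NodeIP":            "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}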
	I1213 10:49:42.647723  396441 ssh_runner.go:195] Run: crio config
	I1213 10:49:42.707356  396441 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1213 10:49:42.707414  396441 cni.go:84] Creating CNI manager for ""
	I1213 10:49:42.707422  396441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:49:42.707430  396441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 10:49:42.707452  396441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-407525 NodeName:functional-407525 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 10:49:42.707613  396441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-407525"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
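The kubeadm config above carries the pod and service CIDRs that later steps depend on. A quick stdlib sanity check of those subnets (a standalone illustration, not a step minikube performs here):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnets as they appear in the generated kubeadm config above.
	for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			fmt.Println(cidr, "invalid:", err)
			continue
		}
		ones, bits := ipnet.Mask.Size()
		fmt.Printf("%s ok (/%d of a %d-bit address space)\n", cidr, ones, bits)
	}
}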
	
	I1213 10:49:42.707687  396441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 10:49:42.715307  396441 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 10:49:42.715378  396441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 10:49:42.722969  396441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 10:49:42.735593  396441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 10:49:42.747933  396441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1213 10:49:42.760993  396441 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 10:49:42.765274  396441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 10:49:42.881089  396441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 10:49:43.272837  396441 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525 for IP: 192.168.49.2
	I1213 10:49:43.272850  396441 certs.go:195] generating shared ca certs ...
	I1213 10:49:43.272866  396441 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:49:43.273008  396441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 10:49:43.273053  396441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 10:49:43.273060  396441 certs.go:257] generating profile certs ...
	I1213 10:49:43.273166  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.key
	I1213 10:49:43.273224  396441 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key.2185ee04
	I1213 10:49:43.273264  396441 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key
	I1213 10:49:43.273384  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 10:49:43.273414  396441 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 10:49:43.273421  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 10:49:43.273447  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 10:49:43.273476  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 10:49:43.273501  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 10:49:43.273543  396441 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 10:49:43.274189  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 10:49:43.293217  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 10:49:43.313563  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 10:49:43.332800  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 10:49:43.356461  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 10:49:43.375598  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 10:49:43.393764  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 10:49:43.411407  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 10:49:43.429560  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 10:49:43.447014  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 10:49:43.465017  396441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 10:49:43.483101  396441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 10:49:43.496527  396441 ssh_runner.go:195] Run: openssl version
	I1213 10:49:43.502994  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.510763  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 10:49:43.518540  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522603  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.522661  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 10:49:43.566464  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 10:49:43.574093  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.581656  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 10:49:43.589363  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593193  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.593258  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 10:49:43.634480  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 10:49:43.641940  396441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.649200  396441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 10:49:43.656832  396441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660735  396441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.660790  396441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 10:49:43.706761  396441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 10:49:43.714203  396441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 10:49:43.718007  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 10:49:43.761049  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 10:49:43.803978  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 10:49:43.847848  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 10:49:43.889404  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 10:49:43.931127  396441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
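Each openssl run above is a `-checkend 86400` check, i.e. "does this certificate survive another 24 hours?". The same check in Go with crypto/x509 (the path is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}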
	I1213 10:49:43.975457  396441 kubeadm.go:401] StartCluster: {Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:49:43.975563  396441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 10:49:43.975628  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.005477  396441 cri.go:89] found id: ""
	I1213 10:49:44.005555  396441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 10:49:44.016406  396441 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 10:49:44.016416  396441 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 10:49:44.016469  396441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 10:49:44.028094  396441 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.028621  396441 kubeconfig.go:125] found "functional-407525" server: "https://192.168.49.2:8441"
	I1213 10:49:44.029882  396441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 10:49:44.039549  396441 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 10:35:07.660360228 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 10:49:42.756829139 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
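The drift detection above is a `diff -u` between the current and freshly generated kubeadm.yaml; any difference triggers a reconfigure. A byte-comparison sketch of that decision (file paths from the log; the reconfigure itself is elided):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	fresh, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(current, fresh) {
		fmt.Println("no kubeadm config drift")
		return
	}
	fmt.Println("kubeadm config drift detected; reconfigure the cluster from the new file")
}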
	I1213 10:49:44.039559  396441 kubeadm.go:1161] stopping kube-system containers ...
	I1213 10:49:44.039569  396441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 10:49:44.039622  396441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 10:49:44.076693  396441 cri.go:89] found id: ""
	I1213 10:49:44.076751  396441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 10:49:44.096721  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:49:44.104663  396441 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 13 10:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 13 10:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Dec 13 10:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 13 10:39 /etc/kubernetes/scheduler.conf
	
	I1213 10:49:44.104731  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:49:44.112473  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:49:44.119938  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.119996  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:49:44.127386  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.135062  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.135113  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:49:44.142352  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:49:44.150087  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 10:49:44.150140  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:49:44.157689  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:49:44.166075  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:44.211012  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.340316  396441 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.129279793s)
	I1213 10:49:46.340374  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.548065  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.621630  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1213 10:49:46.676051  396441 api_server.go:52] waiting for apiserver process to appear ...
	I1213 10:49:46.676117  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:49:47.176335  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" check repeated at ~500ms intervals; 117 near-identical log lines from 10:49:47.676600 to 10:50:45.676348 elided ...]
	I1213 10:50:46.177104  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
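The block above is the apiserver wait loop: the same pgrep runs twice a second until the wait budget (roughly a minute in this log) runs out. A Go sketch of that poll-with-deadline pattern using only the standard library (the pgrep invocation matches the log; the loop shape is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or
// the deadline passes, mirroring the 500ms retry loop in the log.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return true // pgrep exits 0 when a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	if !waitForAPIServer(60 * time.Second) {
		fmt.Println("apiserver process never appeared")
	}
}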
	I1213 10:50:46.676677  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:46.676771  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:46.701985  396441 cri.go:89] found id: ""
	I1213 10:50:46.701999  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.702006  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:46.702011  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:46.702065  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:46.727261  396441 cri.go:89] found id: ""
	I1213 10:50:46.727275  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.727282  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:46.727287  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:46.727352  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:46.756930  396441 cri.go:89] found id: ""
	I1213 10:50:46.756944  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.756952  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:46.756957  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:46.757025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:46.788731  396441 cri.go:89] found id: ""
	I1213 10:50:46.788745  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.788752  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:46.788757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:46.788810  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:46.816991  396441 cri.go:89] found id: ""
	I1213 10:50:46.817004  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.817012  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:46.817017  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:46.817072  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:46.847482  396441 cri.go:89] found id: ""
	I1213 10:50:46.847498  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.847505  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:46.847559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:46.847628  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:46.872720  396441 cri.go:89] found id: ""
	I1213 10:50:46.872734  396441 logs.go:282] 0 containers: []
	W1213 10:50:46.872741  396441 logs.go:284] No container was found matching "kindnet"
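Each crictl call above asks for containers belonging to one control-plane component; --quiet prints only container IDs, so an empty result (found id: "") means no container with that name exists in any state. A rough sketch of the same enumeration, using a plain exec call in place of minikube's cri.go runner and assuming crictl is on the PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose name
// matches the given component, as reported by crictl.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		// Matches the "0 containers: []" lines in the log when nothing is found.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}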
	I1213 10:50:46.872749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:46.872759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:46.942912  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:46.942931  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:46.971862  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:46.971879  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:47.038918  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:47.038938  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:47.053895  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:47.053912  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:47.119106  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:47.111056   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.111745   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113325   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.113616   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:47.115033   10987 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
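The describe-nodes failure is consistent with the empty container listings: kubectl, pointed at the kubeconfig on the node, dials https://localhost:8441 and gets connection refused, meaning nothing is listening on the apiserver port. A direct TCP probe of that port (illustrative only, not part of minikube; the port number comes from the errors above) shows the same condition:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// "connection refused" here corresponds to the kubectl errors above:
		// no process is bound to the apiserver port.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}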
	I1213 10:50:49.619370  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:49.629150  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:49.629213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:49.658173  396441 cri.go:89] found id: ""
	I1213 10:50:49.658186  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.658194  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:49.658199  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:49.658256  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:49.683401  396441 cri.go:89] found id: ""
	I1213 10:50:49.683414  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.683422  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:49.683427  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:49.683484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:49.708416  396441 cri.go:89] found id: ""
	I1213 10:50:49.708440  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.708448  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:49.708454  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:49.708520  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:49.737305  396441 cri.go:89] found id: ""
	I1213 10:50:49.737319  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.737326  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:49.737331  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:49.737385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:49.761415  396441 cri.go:89] found id: ""
	I1213 10:50:49.761431  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.761438  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:49.761443  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:49.761496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:49.805122  396441 cri.go:89] found id: ""
	I1213 10:50:49.805135  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.805142  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:49.805147  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:49.805205  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:49.846981  396441 cri.go:89] found id: ""
	I1213 10:50:49.846995  396441 logs.go:282] 0 containers: []
	W1213 10:50:49.847002  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:49.847010  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:49.847020  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:49.918064  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:49.918084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:49.947649  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:49.947666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:50.012059  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:50.012084  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:50.028985  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:50.029010  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:50.098147  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:50.089035   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.089498   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.091615   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.092842   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:50.093753   11093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.599845  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:52.610036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:52.610095  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:52.638582  396441 cri.go:89] found id: ""
	I1213 10:50:52.638597  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.638603  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:52.638608  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:52.638670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:52.663295  396441 cri.go:89] found id: ""
	I1213 10:50:52.663308  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.663315  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:52.663320  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:52.663375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:52.689168  396441 cri.go:89] found id: ""
	I1213 10:50:52.689182  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.689189  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:52.689194  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:52.689253  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:52.714589  396441 cri.go:89] found id: ""
	I1213 10:50:52.714602  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.714610  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:52.714615  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:52.714669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:52.742324  396441 cri.go:89] found id: ""
	I1213 10:50:52.742338  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.742345  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:52.742363  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:52.742420  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:52.778053  396441 cri.go:89] found id: ""
	I1213 10:50:52.778067  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.778074  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:52.778079  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:52.778138  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:52.805632  396441 cri.go:89] found id: ""
	I1213 10:50:52.805646  396441 logs.go:282] 0 containers: []
	W1213 10:50:52.805653  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:52.805661  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:52.805671  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:52.875461  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:52.875481  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:52.890245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:52.890261  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:52.957587  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:52.949597   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.950157   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.951730   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.952367   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:52.953817   11189 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:52.957599  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:52.957612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:53.025361  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:53.025388  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:55.556570  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:55.566463  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:55.566537  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:55.593903  396441 cri.go:89] found id: ""
	I1213 10:50:55.593917  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.593924  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:55.593929  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:55.593992  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:55.619079  396441 cri.go:89] found id: ""
	I1213 10:50:55.619093  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.619101  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:55.619106  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:55.619162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:55.645916  396441 cri.go:89] found id: ""
	I1213 10:50:55.645931  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.645938  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:55.645943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:55.646012  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:55.671377  396441 cri.go:89] found id: ""
	I1213 10:50:55.671397  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.671405  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:55.671410  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:55.671469  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:55.697872  396441 cri.go:89] found id: ""
	I1213 10:50:55.697886  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.697894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:55.697917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:55.697976  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:55.723576  396441 cri.go:89] found id: ""
	I1213 10:50:55.723589  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.723597  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:55.723602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:55.723655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:55.751256  396441 cri.go:89] found id: ""
	I1213 10:50:55.751270  396441 logs.go:282] 0 containers: []
	W1213 10:50:55.751277  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:55.751286  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:55.751296  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:55.821963  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:55.821982  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:55.836343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:55.836357  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:55.903582  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:55.892408   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.895596   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897286   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.897780   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:55.899369   11295 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:55.903594  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:55.903605  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:55.975012  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:55.975037  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:50:58.506699  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:50:58.517103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:50:58.517162  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:50:58.542695  396441 cri.go:89] found id: ""
	I1213 10:50:58.542717  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.542725  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:50:58.542730  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:50:58.542787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:50:58.574075  396441 cri.go:89] found id: ""
	I1213 10:50:58.574089  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.574096  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:50:58.574101  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:50:58.574161  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:50:58.602982  396441 cri.go:89] found id: ""
	I1213 10:50:58.602997  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.603003  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:50:58.603008  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:50:58.603066  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:50:58.628158  396441 cri.go:89] found id: ""
	I1213 10:50:58.628172  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.628179  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:50:58.628185  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:50:58.628241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:50:58.653050  396441 cri.go:89] found id: ""
	I1213 10:50:58.653064  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.653071  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:50:58.653076  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:50:58.653133  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:50:58.678853  396441 cri.go:89] found id: ""
	I1213 10:50:58.678867  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.678875  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:50:58.678880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:50:58.678938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:50:58.704667  396441 cri.go:89] found id: ""
	I1213 10:50:58.704681  396441 logs.go:282] 0 containers: []
	W1213 10:50:58.704689  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:50:58.704696  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:50:58.704706  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:50:58.769708  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:50:58.769731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:50:58.786197  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:50:58.786214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:50:58.859562  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:50:58.850377   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.851009   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.852748   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.853294   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:50:58.854974   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:50:58.859572  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:50:58.859583  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:50:58.929132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:50:58.929151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.457488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:01.467675  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:01.467734  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:01.494648  396441 cri.go:89] found id: ""
	I1213 10:51:01.494662  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.494669  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:01.494675  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:01.494735  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:01.524042  396441 cri.go:89] found id: ""
	I1213 10:51:01.524056  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.524062  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:01.524068  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:01.524130  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:01.550111  396441 cri.go:89] found id: ""
	I1213 10:51:01.550126  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.550133  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:01.550139  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:01.550207  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:01.579191  396441 cri.go:89] found id: ""
	I1213 10:51:01.579205  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.579213  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:01.579218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:01.579274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:01.606365  396441 cri.go:89] found id: ""
	I1213 10:51:01.606379  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.606387  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:01.606393  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:01.606456  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:01.632570  396441 cri.go:89] found id: ""
	I1213 10:51:01.632584  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.632593  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:01.632598  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:01.632659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:01.659645  396441 cri.go:89] found id: ""
	I1213 10:51:01.659663  396441 logs.go:282] 0 containers: []
	W1213 10:51:01.659671  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:01.659683  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:01.659694  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:01.689331  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:01.689348  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:01.754743  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:01.754766  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:01.772787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:01.772804  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:01.858533  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:01.849677   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.850584   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852497   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.852896   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:01.854393   11524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:01.858545  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:01.858555  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:04.427384  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:04.437715  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:04.437777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:04.463479  396441 cri.go:89] found id: ""
	I1213 10:51:04.463494  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.463501  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:04.463521  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:04.463580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:04.491057  396441 cri.go:89] found id: ""
	I1213 10:51:04.491072  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.491079  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:04.491084  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:04.491142  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:04.518458  396441 cri.go:89] found id: ""
	I1213 10:51:04.518471  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.518478  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:04.518483  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:04.518558  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:04.544830  396441 cri.go:89] found id: ""
	I1213 10:51:04.544844  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.544852  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:04.544857  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:04.544915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:04.571154  396441 cri.go:89] found id: ""
	I1213 10:51:04.571168  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.571177  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:04.571182  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:04.571241  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:04.596261  396441 cri.go:89] found id: ""
	I1213 10:51:04.596275  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.596283  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:04.596288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:04.596344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:04.625558  396441 cri.go:89] found id: ""
	I1213 10:51:04.625572  396441 logs.go:282] 0 containers: []
	W1213 10:51:04.625580  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:04.625587  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:04.625598  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:04.656944  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:04.656961  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:04.722740  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:04.722759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:04.738031  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:04.738051  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:04.817645  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:04.809246   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.810150   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.811791   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.812158   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:04.813687   11625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:04.817655  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:04.817669  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.391199  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:07.401600  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:07.401657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:07.427331  396441 cri.go:89] found id: ""
	I1213 10:51:07.427346  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.427353  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:07.427358  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:07.427417  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:07.452053  396441 cri.go:89] found id: ""
	I1213 10:51:07.452067  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.452074  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:07.452079  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:07.452134  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:07.477750  396441 cri.go:89] found id: ""
	I1213 10:51:07.477764  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.477772  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:07.477777  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:07.477836  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:07.506642  396441 cri.go:89] found id: ""
	I1213 10:51:07.506657  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.506664  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:07.506669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:07.506727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:07.533730  396441 cri.go:89] found id: ""
	I1213 10:51:07.533744  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.533751  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:07.533757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:07.533815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:07.561505  396441 cri.go:89] found id: ""
	I1213 10:51:07.561521  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.561528  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:07.561534  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:07.561587  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:07.586129  396441 cri.go:89] found id: ""
	I1213 10:51:07.586142  396441 logs.go:282] 0 containers: []
	W1213 10:51:07.586149  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:07.586157  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:07.586167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:07.601150  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:07.601167  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:07.664624  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:07.656633   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.657400   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659023   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.659321   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:07.660870   11715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:07.664636  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:07.664649  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:07.733213  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:07.733233  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:07.762844  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:07.762860  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.334136  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:10.344504  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:10.344575  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:10.369562  396441 cri.go:89] found id: ""
	I1213 10:51:10.369575  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.369582  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:10.369587  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:10.369652  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:10.399083  396441 cri.go:89] found id: ""
	I1213 10:51:10.399097  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.399104  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:10.399110  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:10.399166  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:10.425761  396441 cri.go:89] found id: ""
	I1213 10:51:10.425786  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.425794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:10.425799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:10.425863  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:10.452658  396441 cri.go:89] found id: ""
	I1213 10:51:10.452672  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.452679  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:10.452685  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:10.452741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:10.477286  396441 cri.go:89] found id: ""
	I1213 10:51:10.477300  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.477308  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:10.477313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:10.477375  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:10.502400  396441 cri.go:89] found id: ""
	I1213 10:51:10.502414  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.502421  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:10.502427  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:10.502483  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:10.527113  396441 cri.go:89] found id: ""
	I1213 10:51:10.527127  396441 logs.go:282] 0 containers: []
	W1213 10:51:10.527134  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:10.527142  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:10.527152  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:10.558574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:10.558590  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:10.623165  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:10.623185  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:10.637513  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:10.637528  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:10.700566  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:10.691507   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.692166   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694005   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.694639   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:10.696341   11835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:10.700576  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:10.700586  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.275221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:13.285371  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:13.285427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:13.310677  396441 cri.go:89] found id: ""
	I1213 10:51:13.310691  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.310699  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:13.310704  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:13.310766  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:13.339471  396441 cri.go:89] found id: ""
	I1213 10:51:13.339485  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.339493  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:13.339498  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:13.339572  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:13.363772  396441 cri.go:89] found id: ""
	I1213 10:51:13.363787  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.363794  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:13.363799  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:13.363854  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:13.389059  396441 cri.go:89] found id: ""
	I1213 10:51:13.389073  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.389080  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:13.389085  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:13.389140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:13.414845  396441 cri.go:89] found id: ""
	I1213 10:51:13.414859  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.414866  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:13.414871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:13.414926  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:13.444040  396441 cri.go:89] found id: ""
	I1213 10:51:13.444054  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.444061  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:13.444066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:13.444122  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:13.472753  396441 cri.go:89] found id: ""
	I1213 10:51:13.472769  396441 logs.go:282] 0 containers: []
	W1213 10:51:13.472779  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:13.472791  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:13.472806  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:13.487326  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:13.487342  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:13.553218  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:13.543359   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545061   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.545543   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.547693   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:13.548343   11924 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:13.553229  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:13.553239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:13.623642  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:13.623662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:13.652820  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:13.652836  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:16.219667  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:16.229714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:16.229774  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:16.256550  396441 cri.go:89] found id: ""
	I1213 10:51:16.256564  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.256571  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:16.256576  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:16.256638  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:16.281266  396441 cri.go:89] found id: ""
	I1213 10:51:16.281280  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.281286  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:16.281292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:16.281347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:16.313494  396441 cri.go:89] found id: ""
	I1213 10:51:16.313509  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.313517  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:16.313522  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:16.313580  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:16.338750  396441 cri.go:89] found id: ""
	I1213 10:51:16.338775  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.338783  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:16.338788  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:16.338852  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:16.363883  396441 cri.go:89] found id: ""
	I1213 10:51:16.363898  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.363905  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:16.363910  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:16.363980  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:16.390029  396441 cri.go:89] found id: ""
	I1213 10:51:16.390053  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.390060  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:16.390066  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:16.390123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:16.415617  396441 cri.go:89] found id: ""
	I1213 10:51:16.415630  396441 logs.go:282] 0 containers: []
	W1213 10:51:16.415637  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:16.415645  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:16.415660  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:16.430631  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:16.430647  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:16.492590  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:16.484588   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.485123   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.486621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.487162   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:16.488621   12030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:16.492603  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:16.492613  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:16.561556  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:16.561578  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:16.589545  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:16.589561  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:19.159792  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:19.170596  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:19.170661  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:19.198953  396441 cri.go:89] found id: ""
	I1213 10:51:19.198967  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.198974  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:19.198979  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:19.199036  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:19.225113  396441 cri.go:89] found id: ""
	I1213 10:51:19.225128  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.225135  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:19.225140  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:19.225195  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:19.250894  396441 cri.go:89] found id: ""
	I1213 10:51:19.250908  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.250916  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:19.250921  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:19.250975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:19.277076  396441 cri.go:89] found id: ""
	I1213 10:51:19.277091  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.277098  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:19.277103  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:19.277164  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:19.304480  396441 cri.go:89] found id: ""
	I1213 10:51:19.304495  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.304502  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:19.304507  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:19.304567  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:19.330126  396441 cri.go:89] found id: ""
	I1213 10:51:19.330140  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.330147  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:19.330152  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:19.330214  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:19.355882  396441 cri.go:89] found id: ""
	I1213 10:51:19.355896  396441 logs.go:282] 0 containers: []
	W1213 10:51:19.355904  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:19.355912  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:19.355922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:19.423413  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:19.423435  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:19.457267  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:19.457283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:19.523500  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:19.523525  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:19.538313  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:19.538329  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:19.607695  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:19.594247   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.594872   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.601540   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.602226   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:19.603277   12148 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:22.108783  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:22.118887  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:22.118946  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:22.146848  396441 cri.go:89] found id: ""
	I1213 10:51:22.146863  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.146870  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:22.146875  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:22.146929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:22.173022  396441 cri.go:89] found id: ""
	I1213 10:51:22.173036  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.173049  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:22.173055  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:22.173110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:22.197674  396441 cri.go:89] found id: ""
	I1213 10:51:22.197687  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.197695  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:22.197700  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:22.197757  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:22.225539  396441 cri.go:89] found id: ""
	I1213 10:51:22.225553  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.225560  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:22.225565  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:22.225624  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:22.253269  396441 cri.go:89] found id: ""
	I1213 10:51:22.253282  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.253290  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:22.253294  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:22.253355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:22.279157  396441 cri.go:89] found id: ""
	I1213 10:51:22.279172  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.279179  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:22.279184  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:22.279238  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:22.308952  396441 cri.go:89] found id: ""
	I1213 10:51:22.308965  396441 logs.go:282] 0 containers: []
	W1213 10:51:22.308972  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:22.308979  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:22.309000  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:22.323813  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:22.323828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:22.388544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:22.379305   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.380377   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.381133   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382647   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:22.382971   12238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:22.388554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:22.388565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:22.456639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:22.456659  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:22.485416  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:22.485432  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.052020  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:25.063916  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:25.063975  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:25.100470  396441 cri.go:89] found id: ""
	I1213 10:51:25.100484  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.100492  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:25.100498  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:25.100559  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:25.128317  396441 cri.go:89] found id: ""
	I1213 10:51:25.128331  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.128339  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:25.128344  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:25.128399  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:25.159302  396441 cri.go:89] found id: ""
	I1213 10:51:25.159316  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.159323  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:25.159328  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:25.159386  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:25.186563  396441 cri.go:89] found id: ""
	I1213 10:51:25.186577  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.186591  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:25.186597  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:25.186656  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:25.212652  396441 cri.go:89] found id: ""
	I1213 10:51:25.212666  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.212673  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:25.212678  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:25.212738  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:25.238215  396441 cri.go:89] found id: ""
	I1213 10:51:25.238229  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.238236  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:25.238242  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:25.238314  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:25.264506  396441 cri.go:89] found id: ""
	I1213 10:51:25.264519  396441 logs.go:282] 0 containers: []
	W1213 10:51:25.264526  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:25.264533  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:25.264544  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:25.293035  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:25.293052  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:25.358428  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:25.358448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:25.373611  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:25.373627  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:25.438267  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:25.430001   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.430492   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432042   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.432482   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:25.433912   12357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:25.438277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:25.438288  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:28.007912  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:28.020840  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:28.020914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:28.054985  396441 cri.go:89] found id: ""
	I1213 10:51:28.054999  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.055007  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:28.055012  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:28.055076  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:28.086101  396441 cri.go:89] found id: ""
	I1213 10:51:28.086116  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.086123  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:28.086128  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:28.086184  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:28.114710  396441 cri.go:89] found id: ""
	I1213 10:51:28.114725  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.114732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:28.114737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:28.114796  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:28.141803  396441 cri.go:89] found id: ""
	I1213 10:51:28.141817  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.141825  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:28.141831  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:28.141891  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:28.176974  396441 cri.go:89] found id: ""
	I1213 10:51:28.176989  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.176997  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:28.177002  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:28.177063  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:28.202686  396441 cri.go:89] found id: ""
	I1213 10:51:28.202700  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.202707  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:28.202712  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:28.202777  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:28.229573  396441 cri.go:89] found id: ""
	I1213 10:51:28.229587  396441 logs.go:282] 0 containers: []
	W1213 10:51:28.229595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:28.229604  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:28.229617  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:28.245053  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:28.245070  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:28.314477  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:28.305602   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.306469   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.307980   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.308612   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:28.310284   12449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:28.314487  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:28.314513  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:28.382755  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:28.382775  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:28.411608  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:28.411626  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:30.977998  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:30.988313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:30.988371  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:31.017637  396441 cri.go:89] found id: ""
	I1213 10:51:31.017652  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.017659  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:31.017664  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:31.017739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:31.051049  396441 cri.go:89] found id: ""
	I1213 10:51:31.051064  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.051071  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:31.051076  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:31.051147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:31.091994  396441 cri.go:89] found id: ""
	I1213 10:51:31.092012  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.092019  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:31.092025  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:31.092087  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:31.121068  396441 cri.go:89] found id: ""
	I1213 10:51:31.121083  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.121090  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:31.121095  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:31.121154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:31.148227  396441 cri.go:89] found id: ""
	I1213 10:51:31.148240  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.148248  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:31.148253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:31.148309  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:31.174904  396441 cri.go:89] found id: ""
	I1213 10:51:31.174919  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.174926  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:31.174932  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:31.174996  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:31.200730  396441 cri.go:89] found id: ""
	I1213 10:51:31.200743  396441 logs.go:282] 0 containers: []
	W1213 10:51:31.200750  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:31.200757  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:31.200768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:31.215296  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:31.215315  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:31.279266  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:31.270976   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.271649   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273219   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.273818   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:31.275412   12555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:31.279277  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:31.279286  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:31.346253  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:31.346273  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:31.374790  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:31.374805  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:33.942724  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:33.953904  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:33.953965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:33.979791  396441 cri.go:89] found id: ""
	I1213 10:51:33.979806  396441 logs.go:282] 0 containers: []
	W1213 10:51:33.979813  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:33.979819  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:33.979882  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:34.009113  396441 cri.go:89] found id: ""
	I1213 10:51:34.009129  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.009139  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:34.009145  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:34.009213  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:34.054885  396441 cri.go:89] found id: ""
	I1213 10:51:34.054903  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.054911  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:34.054917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:34.054978  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:34.087332  396441 cri.go:89] found id: ""
	I1213 10:51:34.087346  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.087354  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:34.087360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:34.087416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:34.118541  396441 cri.go:89] found id: ""
	I1213 10:51:34.118556  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.118563  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:34.118568  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:34.118626  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:34.148286  396441 cri.go:89] found id: ""
	I1213 10:51:34.148300  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.148308  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:34.148313  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:34.148368  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:34.174436  396441 cri.go:89] found id: ""
	I1213 10:51:34.174450  396441 logs.go:282] 0 containers: []
	W1213 10:51:34.174457  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:34.174465  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:34.174484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:34.239233  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:34.239255  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:34.253915  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:34.253932  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:34.319992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:34.311539   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.312044   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313591   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.313998   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:34.315450   12660 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:34.320001  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:34.320011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:34.387971  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:34.387992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
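
The block above is one full pass of minikube's log-gathering fallback: it probes the CRI for each expected control-plane container, finds none, and then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output instead. A minimal shell sketch of that probe sequence, built only from the commands already shown in the log (the loop itself is illustrative, not minikube code), would be:

    # Probe each expected control-plane container via crictl (commands as shown in the log)
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="${name}")
        [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
    done
    # With no containers found, fall back to host-level logs, as minikube does above
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

The same pass then repeats every few seconds while the test waits for an apiserver that never appears in these logs.
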
	I1213 10:51:36.918587  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:36.930360  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:36.930424  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:36.956712  396441 cri.go:89] found id: ""
	I1213 10:51:36.956726  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.956733  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:36.956738  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:36.956795  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:36.982448  396441 cri.go:89] found id: ""
	I1213 10:51:36.982462  396441 logs.go:282] 0 containers: []
	W1213 10:51:36.982469  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:36.982474  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:36.982541  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:37.014971  396441 cri.go:89] found id: ""
	I1213 10:51:37.014987  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.014994  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:37.015000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:37.015090  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:37.045960  396441 cri.go:89] found id: ""
	I1213 10:51:37.045974  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.045981  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:37.045987  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:37.046044  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:37.077901  396441 cri.go:89] found id: ""
	I1213 10:51:37.077915  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.077933  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:37.077938  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:37.077995  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:37.105187  396441 cri.go:89] found id: ""
	I1213 10:51:37.105207  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.105214  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:37.105220  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:37.105275  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:37.134077  396441 cri.go:89] found id: ""
	I1213 10:51:37.134102  396441 logs.go:282] 0 containers: []
	W1213 10:51:37.134110  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:37.134118  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:37.134129  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:37.199336  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:37.199355  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:37.213787  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:37.213808  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:37.282802  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:37.274301   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.275006   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.276647   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.277214   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:37.278711   12763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:37.282817  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:37.282827  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:37.352930  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:37.352958  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
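
Each "describe nodes" attempt in these passes fails identically: kubectl cannot reach the API server because nothing is accepting connections on localhost:8441. A quick manual check from inside the node would confirm this; the commands below are a hypothetical illustration (ss and curl are assumed to be available, which this log does not show), not part of the test run:

    # Is anything listening on the apiserver port used by this profile?
    sudo ss -tlnp | grep ':8441' || echo "nothing listening on 8441"
    # Does the health endpoint answer? (expected to fail while the apiserver is down)
    curl -k --max-time 5 https://localhost:8441/healthz || true
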
	I1213 10:51:39.888029  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:39.898120  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:39.898197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:39.925423  396441 cri.go:89] found id: ""
	I1213 10:51:39.925437  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.925444  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:39.925450  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:39.925510  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:39.951432  396441 cri.go:89] found id: ""
	I1213 10:51:39.951446  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.951454  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:39.951459  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:39.951547  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:39.977216  396441 cri.go:89] found id: ""
	I1213 10:51:39.977231  396441 logs.go:282] 0 containers: []
	W1213 10:51:39.977238  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:39.977244  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:39.977298  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:40.019791  396441 cri.go:89] found id: ""
	I1213 10:51:40.019808  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.019816  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:40.019823  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:40.019900  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:40.051826  396441 cri.go:89] found id: ""
	I1213 10:51:40.051840  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.051847  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:40.051853  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:40.051928  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:40.091165  396441 cri.go:89] found id: ""
	I1213 10:51:40.091192  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.091200  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:40.091206  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:40.091272  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:40.122957  396441 cri.go:89] found id: ""
	I1213 10:51:40.122972  396441 logs.go:282] 0 containers: []
	W1213 10:51:40.122979  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:40.122986  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:40.122998  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:40.186192  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:40.177419   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.178220   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.179932   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.180506   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:40.182150   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:40.186204  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:40.186214  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:40.252986  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:40.253005  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:40.283019  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:40.283042  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:40.347489  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:40.347521  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:42.863361  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:42.874757  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:42.874824  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:42.899348  396441 cri.go:89] found id: ""
	I1213 10:51:42.899362  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.899370  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:42.899375  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:42.899440  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:42.925079  396441 cri.go:89] found id: ""
	I1213 10:51:42.925092  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.925100  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:42.925105  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:42.925165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:42.951388  396441 cri.go:89] found id: ""
	I1213 10:51:42.951403  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.951410  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:42.951415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:42.951470  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:42.977668  396441 cri.go:89] found id: ""
	I1213 10:51:42.977682  396441 logs.go:282] 0 containers: []
	W1213 10:51:42.977688  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:42.977694  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:42.977748  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:43.002136  396441 cri.go:89] found id: ""
	I1213 10:51:43.002150  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.002157  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:43.002162  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:43.002219  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:43.038950  396441 cri.go:89] found id: ""
	I1213 10:51:43.038963  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.038971  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:43.038976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:43.039033  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:43.071573  396441 cri.go:89] found id: ""
	I1213 10:51:43.071588  396441 logs.go:282] 0 containers: []
	W1213 10:51:43.071595  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:43.071602  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:43.071615  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:43.141998  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:43.142019  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:43.157258  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:43.157274  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:43.224710  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:43.216651   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.217035   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218535   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.218962   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:43.220859   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:43.224720  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:43.224731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:43.294968  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:43.294988  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:45.825007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:45.835672  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:45.835743  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:45.861353  396441 cri.go:89] found id: ""
	I1213 10:51:45.861375  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.861382  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:45.861388  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:45.861452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:45.888508  396441 cri.go:89] found id: ""
	I1213 10:51:45.888522  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.888530  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:45.888534  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:45.888594  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:45.915026  396441 cri.go:89] found id: ""
	I1213 10:51:45.915040  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.915049  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:45.915054  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:45.915108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:45.940299  396441 cri.go:89] found id: ""
	I1213 10:51:45.940313  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.940320  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:45.940325  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:45.940382  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:45.965643  396441 cri.go:89] found id: ""
	I1213 10:51:45.965657  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.965664  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:45.965669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:45.965722  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:45.992269  396441 cri.go:89] found id: ""
	I1213 10:51:45.992283  396441 logs.go:282] 0 containers: []
	W1213 10:51:45.992290  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:45.992295  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:45.992354  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:46.024907  396441 cri.go:89] found id: ""
	I1213 10:51:46.024922  396441 logs.go:282] 0 containers: []
	W1213 10:51:46.024941  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:46.024950  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:46.024980  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:46.072645  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:46.072664  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:46.144539  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:46.144569  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:46.160047  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:46.160063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:46.224857  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:46.216357   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.217032   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.218768   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.219308   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:46.220994   13086 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:46.224867  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:46.224878  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:48.792536  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:48.802577  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:48.802642  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:48.826706  396441 cri.go:89] found id: ""
	I1213 10:51:48.826720  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.826727  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:48.826733  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:48.826787  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:48.851205  396441 cri.go:89] found id: ""
	I1213 10:51:48.851219  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.851226  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:48.851232  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:48.851286  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:48.875646  396441 cri.go:89] found id: ""
	I1213 10:51:48.875661  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.875669  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:48.875674  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:48.875742  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:48.902019  396441 cri.go:89] found id: ""
	I1213 10:51:48.902033  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.902041  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:48.902046  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:48.902102  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:48.926529  396441 cri.go:89] found id: ""
	I1213 10:51:48.926543  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.926550  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:48.926555  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:48.926610  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:48.952549  396441 cri.go:89] found id: ""
	I1213 10:51:48.952563  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.952570  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:48.952576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:48.952637  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:48.977178  396441 cri.go:89] found id: ""
	I1213 10:51:48.977191  396441 logs.go:282] 0 containers: []
	W1213 10:51:48.977198  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:48.977206  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:48.977218  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:49.044123  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:49.044147  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:49.066217  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:49.066239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:49.145635  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:49.136657   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.137144   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139046   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.139577   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:49.141421   13180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:49.145645  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:49.145655  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:49.212965  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:49.212984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:51.744115  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:51.755896  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:51.755984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:51.790945  396441 cri.go:89] found id: ""
	I1213 10:51:51.790958  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.790965  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:51.790970  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:51.791024  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:51.816688  396441 cri.go:89] found id: ""
	I1213 10:51:51.816702  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.816709  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:51.816715  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:51.816782  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:51.841873  396441 cri.go:89] found id: ""
	I1213 10:51:51.841886  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.841893  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:51.841898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:51.841955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:51.867108  396441 cri.go:89] found id: ""
	I1213 10:51:51.867121  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.867129  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:51.867134  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:51.867187  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:51.892370  396441 cri.go:89] found id: ""
	I1213 10:51:51.892383  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.892390  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:51.892395  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:51.892453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:51.923043  396441 cri.go:89] found id: ""
	I1213 10:51:51.923057  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.923064  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:51.923069  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:51.923159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:51.948869  396441 cri.go:89] found id: ""
	I1213 10:51:51.948882  396441 logs.go:282] 0 containers: []
	W1213 10:51:51.948889  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:51.948897  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:51.948926  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:52.018383  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:52.006286   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.007111   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.008967   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.009594   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:52.011259   13273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:52.018405  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:52.018422  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:52.099342  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:52.099363  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:52.136780  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:52.136795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:52.202388  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:52.202408  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:54.716950  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:54.726860  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:54.726918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:54.751377  396441 cri.go:89] found id: ""
	I1213 10:51:54.751389  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.751396  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:54.751401  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:54.751460  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:54.776769  396441 cri.go:89] found id: ""
	I1213 10:51:54.776782  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.776801  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:54.776806  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:54.776871  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:54.806646  396441 cri.go:89] found id: ""
	I1213 10:51:54.806659  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.806666  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:54.806671  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:54.806727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:54.834243  396441 cri.go:89] found id: ""
	I1213 10:51:54.834256  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.834264  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:54.834269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:54.834322  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:54.859938  396441 cri.go:89] found id: ""
	I1213 10:51:54.859958  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.859965  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:54.859970  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:54.860025  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:54.886545  396441 cri.go:89] found id: ""
	I1213 10:51:54.886559  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.886565  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:54.886571  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:54.886633  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:54.911784  396441 cri.go:89] found id: ""
	I1213 10:51:54.911798  396441 logs.go:282] 0 containers: []
	W1213 10:51:54.911805  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:54.911812  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:54.911828  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:54.973210  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:54.965415   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.965956   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.967424   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.968013   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:54.969442   13376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:54.973220  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:54.973230  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:55.051411  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:55.051430  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:51:55.085480  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:55.085497  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:55.151220  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:55.151241  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.666660  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:51:57.676624  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:51:57.676689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:51:57.702082  396441 cri.go:89] found id: ""
	I1213 10:51:57.702095  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.702103  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:51:57.702108  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:51:57.702171  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:51:57.727577  396441 cri.go:89] found id: ""
	I1213 10:51:57.727591  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.727598  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:51:57.727603  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:51:57.727657  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:51:57.752756  396441 cri.go:89] found id: ""
	I1213 10:51:57.752770  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.752777  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:51:57.752782  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:51:57.752846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:51:57.778022  396441 cri.go:89] found id: ""
	I1213 10:51:57.778036  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.778043  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:51:57.778048  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:51:57.778108  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:51:57.803300  396441 cri.go:89] found id: ""
	I1213 10:51:57.803314  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.803321  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:51:57.803326  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:51:57.803385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:51:57.828374  396441 cri.go:89] found id: ""
	I1213 10:51:57.828389  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.828396  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:51:57.828402  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:51:57.828457  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:51:57.854910  396441 cri.go:89] found id: ""
	I1213 10:51:57.854925  396441 logs.go:282] 0 containers: []
	W1213 10:51:57.854947  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:51:57.854955  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:51:57.854965  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:51:57.919106  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:51:57.919126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:51:57.933832  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:51:57.933847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:51:58.000903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:51:57.992995   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.993480   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.994938   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.995239   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:51:57.996659   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:51:58.000914  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:51:58.000925  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:51:58.077434  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:51:58.077453  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:00.612878  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:00.623959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:00.624026  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:00.653620  396441 cri.go:89] found id: ""
	I1213 10:52:00.653635  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.653642  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:00.653647  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:00.653705  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:00.679802  396441 cri.go:89] found id: ""
	I1213 10:52:00.679818  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.679825  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:00.679830  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:00.679890  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:00.706677  396441 cri.go:89] found id: ""
	I1213 10:52:00.706691  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.706698  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:00.706703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:00.706759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:00.734612  396441 cri.go:89] found id: ""
	I1213 10:52:00.734627  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.734634  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:00.734640  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:00.734697  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:00.761763  396441 cri.go:89] found id: ""
	I1213 10:52:00.761777  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.761784  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:00.761790  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:00.761846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:00.790057  396441 cri.go:89] found id: ""
	I1213 10:52:00.790071  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.790078  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:00.790083  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:00.790140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:00.816353  396441 cri.go:89] found id: ""
	I1213 10:52:00.816367  396441 logs.go:282] 0 containers: []
	W1213 10:52:00.816374  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:00.816381  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:00.816391  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:00.881315  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:00.881335  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:00.896220  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:00.896239  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:00.961380  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:00.953176   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.953559   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955115   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.955439   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:00.957035   13592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:00.961391  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:00.961401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:01.031353  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:01.031373  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.565879  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:03.575985  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:03.576043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:03.605780  396441 cri.go:89] found id: ""
	I1213 10:52:03.605794  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.605801  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:03.605807  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:03.605864  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:03.630990  396441 cri.go:89] found id: ""
	I1213 10:52:03.631006  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.631013  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:03.631018  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:03.631073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:03.658564  396441 cri.go:89] found id: ""
	I1213 10:52:03.658578  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.658585  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:03.658590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:03.658645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:03.689093  396441 cri.go:89] found id: ""
	I1213 10:52:03.689108  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.689116  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:03.689121  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:03.689179  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:03.714786  396441 cri.go:89] found id: ""
	I1213 10:52:03.714800  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.714807  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:03.714812  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:03.714870  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:03.741755  396441 cri.go:89] found id: ""
	I1213 10:52:03.741769  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.741777  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:03.741783  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:03.741841  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:03.771487  396441 cri.go:89] found id: ""
	I1213 10:52:03.771502  396441 logs.go:282] 0 containers: []
	W1213 10:52:03.771509  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:03.771538  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:03.771548  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:03.800650  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:03.800666  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:03.866429  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:03.866448  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:03.882243  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:03.882260  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:03.951157  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:03.941996   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.942648   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944288   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.944871   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:03.946634   13709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:03.951167  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:03.951190  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.522609  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:06.532880  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:06.532944  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:06.557937  396441 cri.go:89] found id: ""
	I1213 10:52:06.557952  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.557959  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:06.557965  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:06.558020  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:06.588572  396441 cri.go:89] found id: ""
	I1213 10:52:06.588586  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.588595  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:06.588600  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:06.588660  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:06.614455  396441 cri.go:89] found id: ""
	I1213 10:52:06.614468  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.614476  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:06.614481  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:06.614546  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:06.640258  396441 cri.go:89] found id: ""
	I1213 10:52:06.640272  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.640279  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:06.640285  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:06.640341  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:06.666195  396441 cri.go:89] found id: ""
	I1213 10:52:06.666209  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.666216  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:06.666222  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:06.666278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:06.690768  396441 cri.go:89] found id: ""
	I1213 10:52:06.690781  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.690788  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:06.690793  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:06.690846  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:06.714814  396441 cri.go:89] found id: ""
	I1213 10:52:06.714828  396441 logs.go:282] 0 containers: []
	W1213 10:52:06.714835  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:06.714842  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:06.714852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:06.779445  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:06.779463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:06.794405  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:06.794419  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:06.863881  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:06.854615   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.855387   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857219   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.857866   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:06.858840   13804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:06.863893  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:06.863903  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:06.931872  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:06.931893  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:09.461689  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:09.471808  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:09.471866  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:09.498684  396441 cri.go:89] found id: ""
	I1213 10:52:09.498698  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.498705  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:09.498710  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:09.498770  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:09.525226  396441 cri.go:89] found id: ""
	I1213 10:52:09.525240  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.525248  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:09.525253  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:09.525312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:09.552412  396441 cri.go:89] found id: ""
	I1213 10:52:09.552426  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.552433  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:09.552438  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:09.552496  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:09.581636  396441 cri.go:89] found id: ""
	I1213 10:52:09.581650  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.581657  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:09.581662  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:09.581717  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:09.606899  396441 cri.go:89] found id: ""
	I1213 10:52:09.606913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.606926  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:09.606931  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:09.606985  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:09.635899  396441 cri.go:89] found id: ""
	I1213 10:52:09.635913  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.635920  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:09.635926  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:09.635990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:09.660294  396441 cri.go:89] found id: ""
	I1213 10:52:09.660308  396441 logs.go:282] 0 containers: []
	W1213 10:52:09.660315  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:09.660322  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:09.660332  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:09.727938  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:09.727956  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:09.742322  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:09.742337  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:09.806667  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:09.798536   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.798981   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800481   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.800865   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:09.802370   13909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:09.806677  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:09.806688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:09.873384  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:09.873405  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.403419  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:12.413610  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:12.413670  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:12.439264  396441 cri.go:89] found id: ""
	I1213 10:52:12.439277  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.439285  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:12.439290  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:12.439347  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:12.464906  396441 cri.go:89] found id: ""
	I1213 10:52:12.464920  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.464927  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:12.464932  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:12.464988  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:12.498036  396441 cri.go:89] found id: ""
	I1213 10:52:12.498050  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.498057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:12.498062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:12.498124  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:12.527408  396441 cri.go:89] found id: ""
	I1213 10:52:12.527424  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.527432  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:12.527437  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:12.527493  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:12.553426  396441 cri.go:89] found id: ""
	I1213 10:52:12.553440  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.553449  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:12.553456  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:12.553512  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:12.577801  396441 cri.go:89] found id: ""
	I1213 10:52:12.577821  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.577829  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:12.577834  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:12.577892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:12.602596  396441 cri.go:89] found id: ""
	I1213 10:52:12.602610  396441 logs.go:282] 0 containers: []
	W1213 10:52:12.602617  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:12.602625  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:12.602636  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:12.617159  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:12.617175  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:12.679319  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:12.671034   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.671563   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673241   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.673891   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:12.675542   14011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:12.679331  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:12.679344  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:12.750080  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:12.750100  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:12.781595  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:12.781612  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:15.350487  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:15.360659  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:15.360718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:15.387859  396441 cri.go:89] found id: ""
	I1213 10:52:15.387872  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.387879  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:15.387885  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:15.387938  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:15.414186  396441 cri.go:89] found id: ""
	I1213 10:52:15.414200  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.414207  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:15.414212  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:15.414279  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:15.441078  396441 cri.go:89] found id: ""
	I1213 10:52:15.441093  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.441099  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:15.441105  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:15.441160  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:15.469023  396441 cri.go:89] found id: ""
	I1213 10:52:15.469038  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.469045  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:15.469051  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:15.469107  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:15.497840  396441 cri.go:89] found id: ""
	I1213 10:52:15.497855  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.497862  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:15.497870  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:15.497929  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:15.527216  396441 cri.go:89] found id: ""
	I1213 10:52:15.527240  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.527248  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:15.527253  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:15.527318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:15.552512  396441 cri.go:89] found id: ""
	I1213 10:52:15.552526  396441 logs.go:282] 0 containers: []
	W1213 10:52:15.552533  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:15.552541  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:15.552551  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:15.566854  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:15.566872  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:15.630069  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:15.622023   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.622578   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624163   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.624769   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:15.626104   14112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:15.630081  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:15.630091  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:15.696860  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:15.696880  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:15.724271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:15.724287  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.289647  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:18.301895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:18.301952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:18.337658  396441 cri.go:89] found id: ""
	I1213 10:52:18.337672  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.337679  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:18.337684  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:18.337739  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:18.362954  396441 cri.go:89] found id: ""
	I1213 10:52:18.362968  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.362975  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:18.362980  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:18.363038  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:18.388674  396441 cri.go:89] found id: ""
	I1213 10:52:18.388687  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.388694  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:18.388699  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:18.388759  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:18.420176  396441 cri.go:89] found id: ""
	I1213 10:52:18.420189  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.420196  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:18.420202  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:18.420264  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:18.445491  396441 cri.go:89] found id: ""
	I1213 10:52:18.445505  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.445513  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:18.445518  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:18.445579  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:18.470012  396441 cri.go:89] found id: ""
	I1213 10:52:18.470026  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.470034  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:18.470039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:18.470097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:18.495243  396441 cri.go:89] found id: ""
	I1213 10:52:18.495257  396441 logs.go:282] 0 containers: []
	W1213 10:52:18.495264  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:18.495271  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:18.495282  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:18.563479  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:18.563500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:18.578295  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:18.578311  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:18.646148  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:18.637765   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.638446   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640058   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.640577   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:18.642125   14219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:18.646163  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:18.646174  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:18.718257  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:18.718284  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:21.249994  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:21.259664  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:21.259726  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:21.295330  396441 cri.go:89] found id: ""
	I1213 10:52:21.295344  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.295352  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:21.295359  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:21.295416  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:21.321231  396441 cri.go:89] found id: ""
	I1213 10:52:21.321244  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.321252  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:21.321257  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:21.321315  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:21.352593  396441 cri.go:89] found id: ""
	I1213 10:52:21.352607  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.352615  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:21.352620  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:21.352673  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:21.377931  396441 cri.go:89] found id: ""
	I1213 10:52:21.377946  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.377953  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:21.377959  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:21.378013  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:21.402837  396441 cri.go:89] found id: ""
	I1213 10:52:21.402851  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.402857  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:21.402863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:21.402917  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:21.431840  396441 cri.go:89] found id: ""
	I1213 10:52:21.431855  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.431862  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:21.431867  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:21.431923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:21.456743  396441 cri.go:89] found id: ""
	I1213 10:52:21.456757  396441 logs.go:282] 0 containers: []
	W1213 10:52:21.456764  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:21.456772  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:21.456783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:21.524923  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:21.524943  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:21.539831  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:21.539847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:21.606862  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:21.598783   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.599644   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601151   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.601554   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:21.603029   14326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:21.606873  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:21.606883  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:21.674639  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:21.674658  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.206551  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:24.216405  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:24.216463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:24.242228  396441 cri.go:89] found id: ""
	I1213 10:52:24.242242  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.242257  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:24.242262  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:24.242323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:24.267087  396441 cri.go:89] found id: ""
	I1213 10:52:24.267101  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.267108  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:24.267113  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:24.267165  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:24.309002  396441 cri.go:89] found id: ""
	I1213 10:52:24.309015  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.309022  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:24.309027  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:24.309094  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:24.339349  396441 cri.go:89] found id: ""
	I1213 10:52:24.339362  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.339370  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:24.339375  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:24.339432  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:24.368576  396441 cri.go:89] found id: ""
	I1213 10:52:24.368590  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.368597  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:24.368602  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:24.368659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:24.394642  396441 cri.go:89] found id: ""
	I1213 10:52:24.394656  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.394663  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:24.394669  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:24.394733  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:24.421211  396441 cri.go:89] found id: ""
	I1213 10:52:24.421225  396441 logs.go:282] 0 containers: []
	W1213 10:52:24.421232  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:24.421240  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:24.421250  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:24.487558  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:24.479220   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.479760   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481451   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.481967   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:24.483636   14425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:24.487569  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:24.487579  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:24.558449  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:24.558469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:24.588318  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:24.588333  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:24.654250  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:24.654270  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:27.169201  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:27.180049  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:27.180109  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:27.206061  396441 cri.go:89] found id: ""
	I1213 10:52:27.206075  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.206082  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:27.206096  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:27.206154  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:27.233191  396441 cri.go:89] found id: ""
	I1213 10:52:27.233205  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.233214  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:27.233219  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:27.233281  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:27.260006  396441 cri.go:89] found id: ""
	I1213 10:52:27.260026  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.260034  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:27.260039  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:27.260097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:27.297935  396441 cri.go:89] found id: ""
	I1213 10:52:27.297949  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.297956  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:27.297962  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:27.298016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:27.327550  396441 cri.go:89] found id: ""
	I1213 10:52:27.327564  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.327571  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:27.327576  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:27.327632  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:27.357264  396441 cri.go:89] found id: ""
	I1213 10:52:27.357277  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.357285  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:27.357290  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:27.357345  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:27.386557  396441 cri.go:89] found id: ""
	I1213 10:52:27.386571  396441 logs.go:282] 0 containers: []
	W1213 10:52:27.386579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:27.386587  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:27.386600  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:27.451879  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:27.451900  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:27.466743  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:27.466762  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:27.534974  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:27.526464   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.527041   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.528790   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.529428   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:27.530940   14533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:27.534984  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:27.534996  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:27.603674  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:27.603693  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:30.134007  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:30.145384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:30.145454  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:30.177035  396441 cri.go:89] found id: ""
	I1213 10:52:30.177050  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.177058  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:30.177063  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:30.177121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:30.203582  396441 cri.go:89] found id: ""
	I1213 10:52:30.203597  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.203604  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:30.203609  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:30.203689  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:30.230074  396441 cri.go:89] found id: ""
	I1213 10:52:30.230088  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.230106  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:30.230112  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:30.230183  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:30.255406  396441 cri.go:89] found id: ""
	I1213 10:52:30.255431  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.255439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:30.255445  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:30.255527  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:30.302847  396441 cri.go:89] found id: ""
	I1213 10:52:30.302861  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.302869  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:30.302876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:30.302931  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:30.345708  396441 cri.go:89] found id: ""
	I1213 10:52:30.345722  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.345730  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:30.345735  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:30.345794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:30.373285  396441 cri.go:89] found id: ""
	I1213 10:52:30.373298  396441 logs.go:282] 0 containers: []
	W1213 10:52:30.373305  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:30.373313  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:30.373323  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:30.438965  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:30.438984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:30.453939  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:30.453957  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:30.519205  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:30.509989   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.510631   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512097   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.512762   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:30.515602   14638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:30.519233  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:30.519245  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:30.587307  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:30.587327  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:33.117585  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:33.128213  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:33.128278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:33.159433  396441 cri.go:89] found id: ""
	I1213 10:52:33.159447  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.159455  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:33.159462  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:33.159561  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:33.188876  396441 cri.go:89] found id: ""
	I1213 10:52:33.188890  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.188898  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:33.188904  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:33.188959  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:33.213013  396441 cri.go:89] found id: ""
	I1213 10:52:33.213026  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.213033  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:33.213038  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:33.213098  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:33.237950  396441 cri.go:89] found id: ""
	I1213 10:52:33.237964  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.237971  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:33.237976  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:33.238030  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:33.262873  396441 cri.go:89] found id: ""
	I1213 10:52:33.262887  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.262894  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:33.262899  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:33.262955  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:33.289230  396441 cri.go:89] found id: ""
	I1213 10:52:33.289243  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.289250  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:33.289256  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:33.289312  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:33.322162  396441 cri.go:89] found id: ""
	I1213 10:52:33.322175  396441 logs.go:282] 0 containers: []
	W1213 10:52:33.322182  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:33.322196  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:33.322206  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:33.350122  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:33.350138  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:33.415463  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:33.415483  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:33.430091  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:33.430108  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:33.492694  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:33.484780   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.485349   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.486880   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.487242   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:33.488741   14752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:33.492704  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:33.492713  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.059928  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:36.071377  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:36.071452  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:36.097664  396441 cri.go:89] found id: ""
	I1213 10:52:36.097678  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.097685  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:36.097691  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:36.097753  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:36.123266  396441 cri.go:89] found id: ""
	I1213 10:52:36.123280  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.123287  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:36.123292  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:36.123348  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:36.149443  396441 cri.go:89] found id: ""
	I1213 10:52:36.149456  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.149464  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:36.149469  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:36.149525  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:36.174882  396441 cri.go:89] found id: ""
	I1213 10:52:36.174896  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.174903  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:36.174909  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:36.174965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:36.204325  396441 cri.go:89] found id: ""
	I1213 10:52:36.204348  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.204356  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:36.204362  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:36.204427  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:36.234444  396441 cri.go:89] found id: ""
	I1213 10:52:36.234457  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.234474  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:36.234479  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:36.234550  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:36.259366  396441 cri.go:89] found id: ""
	I1213 10:52:36.259390  396441 logs.go:282] 0 containers: []
	W1213 10:52:36.259397  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:36.259406  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:36.259416  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:36.332816  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:36.332834  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:36.348343  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:36.348362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:36.412337  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:36.404175   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.404717   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406173   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.406606   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:36.408021   14847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:36.412348  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:36.412358  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:36.480447  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:36.480469  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:39.011418  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:39.022791  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:39.022856  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:39.048926  396441 cri.go:89] found id: ""
	I1213 10:52:39.048939  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.048946  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:39.048951  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:39.049008  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:39.074187  396441 cri.go:89] found id: ""
	I1213 10:52:39.074201  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.074209  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:39.074214  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:39.074274  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:39.099262  396441 cri.go:89] found id: ""
	I1213 10:52:39.099275  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.099282  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:39.099288  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:39.099351  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:39.123854  396441 cri.go:89] found id: ""
	I1213 10:52:39.123868  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.123876  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:39.123881  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:39.123935  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:39.148849  396441 cri.go:89] found id: ""
	I1213 10:52:39.148864  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.148871  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:39.148876  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:39.148937  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:39.178852  396441 cri.go:89] found id: ""
	I1213 10:52:39.178866  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.178873  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:39.178879  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:39.178936  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:39.203878  396441 cri.go:89] found id: ""
	I1213 10:52:39.203892  396441 logs.go:282] 0 containers: []
	W1213 10:52:39.203899  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:39.203907  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:39.203921  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:39.270764  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:39.270783  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:39.286957  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:39.286976  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:39.359682  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:39.351441   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.352404   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354057   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.354437   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:39.355940   14951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:39.359693  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:39.359707  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:39.429853  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:39.429874  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:41.960684  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:41.971667  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:41.971727  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:42.002821  396441 cri.go:89] found id: ""
	I1213 10:52:42.002836  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.002844  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:42.002849  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:42.002914  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:42.045054  396441 cri.go:89] found id: ""
	I1213 10:52:42.045068  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.045075  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:42.045080  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:42.045141  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:42.077836  396441 cri.go:89] found id: ""
	I1213 10:52:42.077852  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.077865  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:42.077871  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:42.077947  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:42.115684  396441 cri.go:89] found id: ""
	I1213 10:52:42.115706  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.115714  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:42.115729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:42.115828  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:42.147177  396441 cri.go:89] found id: ""
	I1213 10:52:42.147194  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.147202  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:42.147208  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:42.147280  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:42.180144  396441 cri.go:89] found id: ""
	I1213 10:52:42.180165  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.180174  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:42.180181  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:42.180255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:42.220442  396441 cri.go:89] found id: ""
	I1213 10:52:42.220457  396441 logs.go:282] 0 containers: []
	W1213 10:52:42.220466  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:42.220475  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:42.220486  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:42.297964  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:42.297984  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:42.315552  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:42.315571  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:42.388538  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:42.380217   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.380830   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382313   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.382956   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:42.384571   15060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:42.388548  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:42.388558  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:42.457255  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:42.457276  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:44.987527  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:44.999384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:44.999443  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:45.050333  396441 cri.go:89] found id: ""
	I1213 10:52:45.050351  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.050366  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:45.050372  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:45.050449  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:45.102093  396441 cri.go:89] found id: ""
	I1213 10:52:45.102110  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.102126  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:45.102132  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:45.102218  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:45.141159  396441 cri.go:89] found id: ""
	I1213 10:52:45.141176  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.141184  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:45.141190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:45.141265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:45.181959  396441 cri.go:89] found id: ""
	I1213 10:52:45.181976  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.181994  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:45.182000  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:45.182074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:45.231005  396441 cri.go:89] found id: ""
	I1213 10:52:45.231020  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.231027  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:45.231033  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:45.231103  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:45.269802  396441 cri.go:89] found id: ""
	I1213 10:52:45.269816  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.269824  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:45.269829  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:45.269906  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:45.302267  396441 cri.go:89] found id: ""
	I1213 10:52:45.302281  396441 logs.go:282] 0 containers: []
	W1213 10:52:45.302289  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:45.302297  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:45.302307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:45.375709  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:45.375731  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:45.390641  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:45.390662  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:45.456742  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:45.449052   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.449482   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451067   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.451394   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:45.452876   15166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:45.456753  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:45.456763  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:45.525649  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:45.525668  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.060311  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:48.071648  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:48.071715  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:48.102851  396441 cri.go:89] found id: ""
	I1213 10:52:48.102865  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.102872  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:48.102878  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:48.102948  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:48.128470  396441 cri.go:89] found id: ""
	I1213 10:52:48.128485  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.128492  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:48.128499  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:48.128556  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:48.155177  396441 cri.go:89] found id: ""
	I1213 10:52:48.155197  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.155205  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:48.155210  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:48.155265  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:48.182358  396441 cri.go:89] found id: ""
	I1213 10:52:48.182373  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.182380  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:48.182385  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:48.182447  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:48.208531  396441 cri.go:89] found id: ""
	I1213 10:52:48.208550  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.208557  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:48.208562  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:48.208616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:48.234008  396441 cri.go:89] found id: ""
	I1213 10:52:48.234023  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.234031  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:48.234036  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:48.234093  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:48.261447  396441 cri.go:89] found id: ""
	I1213 10:52:48.261461  396441 logs.go:282] 0 containers: []
	W1213 10:52:48.261469  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:48.261480  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:48.261492  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:48.278413  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:48.278429  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:48.358811  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:48.350678   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.351326   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.352876   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.353394   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:48.354912   15267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:48.358821  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:48.358832  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:48.433414  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:48.433443  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:48.466431  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:48.466452  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.033966  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:51.044258  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:51.044317  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:51.072809  396441 cri.go:89] found id: ""
	I1213 10:52:51.072823  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.072830  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:51.072836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:51.072895  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:51.102333  396441 cri.go:89] found id: ""
	I1213 10:52:51.102346  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.102353  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:51.102358  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:51.102415  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:51.128414  396441 cri.go:89] found id: ""
	I1213 10:52:51.128427  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.128434  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:51.128439  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:51.128494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:51.154902  396441 cri.go:89] found id: ""
	I1213 10:52:51.154916  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.154923  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:51.154928  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:51.154983  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:51.182112  396441 cri.go:89] found id: ""
	I1213 10:52:51.182126  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.182133  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:51.182143  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:51.182197  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:51.207919  396441 cri.go:89] found id: ""
	I1213 10:52:51.207933  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.207941  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:51.207946  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:51.208001  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:51.234193  396441 cri.go:89] found id: ""
	I1213 10:52:51.234207  396441 logs.go:282] 0 containers: []
	W1213 10:52:51.234214  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:51.234222  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:51.234238  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:51.303042  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:51.303060  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:51.321366  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:51.321383  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:51.393364  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:51.385234   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.385964   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387481   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.387938   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:51.389445   15377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:51.393375  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:51.393385  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:51.461747  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:51.461768  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:53.992488  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:54.002605  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:54.002667  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:54.037835  396441 cri.go:89] found id: ""
	I1213 10:52:54.037849  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.037857  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:54.037862  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:54.037934  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:54.066982  396441 cri.go:89] found id: ""
	I1213 10:52:54.066998  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.067009  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:54.067015  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:54.067074  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:54.093461  396441 cri.go:89] found id: ""
	I1213 10:52:54.093475  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.093482  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:54.093487  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:54.093544  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:54.123249  396441 cri.go:89] found id: ""
	I1213 10:52:54.123263  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.123271  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:54.123276  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:54.123333  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:54.150103  396441 cri.go:89] found id: ""
	I1213 10:52:54.150116  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.150124  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:54.150130  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:54.150186  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:54.176271  396441 cri.go:89] found id: ""
	I1213 10:52:54.176285  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.176291  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:54.176296  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:54.176355  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:54.204655  396441 cri.go:89] found id: ""
	I1213 10:52:54.204669  396441 logs.go:282] 0 containers: []
	W1213 10:52:54.204676  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:54.204684  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:54.204695  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:54.270252  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:54.259997   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.260697   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262376   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.262983   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:54.264572   15474 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:54.270262  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:54.270272  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:54.345996  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:54.346016  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:54.383713  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:54.383730  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:54.450349  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:54.450368  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:56.966888  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:56.976557  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:56.976616  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:57.007803  396441 cri.go:89] found id: ""
	I1213 10:52:57.007828  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.007836  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:57.007842  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:57.007910  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:57.035051  396441 cri.go:89] found id: ""
	I1213 10:52:57.035065  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.035073  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:57.035078  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:57.035137  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:52:57.060632  396441 cri.go:89] found id: ""
	I1213 10:52:57.060645  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.060652  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:52:57.060657  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:52:57.060716  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:52:57.090660  396441 cri.go:89] found id: ""
	I1213 10:52:57.090674  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.090681  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:52:57.090686  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:52:57.090741  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:52:57.115624  396441 cri.go:89] found id: ""
	I1213 10:52:57.115638  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.115645  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:52:57.115650  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:52:57.115718  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:52:57.146066  396441 cri.go:89] found id: ""
	I1213 10:52:57.146080  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.146087  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:52:57.146093  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:52:57.146147  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:52:57.174574  396441 cri.go:89] found id: ""
	I1213 10:52:57.174589  396441 logs.go:282] 0 containers: []
	W1213 10:52:57.174596  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:52:57.174604  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:52:57.174614  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:52:57.202471  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:52:57.202487  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:52:57.267828  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:52:57.267852  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:52:57.284906  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:52:57.284922  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:52:57.357618  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:52:57.350279   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.350835   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.351877   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.352319   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:52:57.353722   15604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:52:57.357629  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:52:57.357641  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:52:59.928373  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:52:59.939417  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:52:59.939503  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:52:59.968871  396441 cri.go:89] found id: ""
	I1213 10:52:59.968885  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.968892  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:52:59.968897  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:52:59.968952  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:52:59.994167  396441 cri.go:89] found id: ""
	I1213 10:52:59.994181  396441 logs.go:282] 0 containers: []
	W1213 10:52:59.994188  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:52:59.994192  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:52:59.994244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:00.051356  396441 cri.go:89] found id: ""
	I1213 10:53:00.051372  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.051380  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:00.051386  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:00.051453  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:00.143874  396441 cri.go:89] found id: ""
	I1213 10:53:00.143902  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.143910  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:00.143915  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:00.143990  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:00.245636  396441 cri.go:89] found id: ""
	I1213 10:53:00.245660  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.245669  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:00.245676  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:00.245762  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:00.304351  396441 cri.go:89] found id: ""
	I1213 10:53:00.304370  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.304378  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:00.304384  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:00.304463  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:00.342460  396441 cri.go:89] found id: ""
	I1213 10:53:00.342483  396441 logs.go:282] 0 containers: []
	W1213 10:53:00.342492  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:00.342503  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:00.342552  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:00.422913  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:00.413257   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.414124   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416191   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.416801   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:00.418644   15693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:00.422924  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:00.422935  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:00.494010  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:00.494031  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:00.523384  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:00.523401  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:00.590600  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:00.590620  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.105926  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:03.116415  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:03.116476  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:03.148167  396441 cri.go:89] found id: ""
	I1213 10:53:03.148181  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.148189  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:03.148195  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:03.148255  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:03.173610  396441 cri.go:89] found id: ""
	I1213 10:53:03.173624  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.173633  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:03.173638  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:03.173698  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:03.198406  396441 cri.go:89] found id: ""
	I1213 10:53:03.198420  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.198427  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:03.198432  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:03.198494  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:03.228196  396441 cri.go:89] found id: ""
	I1213 10:53:03.228210  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.228218  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:03.228223  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:03.228284  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:03.258506  396441 cri.go:89] found id: ""
	I1213 10:53:03.258539  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.258547  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:03.258552  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:03.258617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:03.293938  396441 cri.go:89] found id: ""
	I1213 10:53:03.293951  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.293968  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:03.293973  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:03.294029  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:03.322417  396441 cri.go:89] found id: ""
	I1213 10:53:03.322441  396441 logs.go:282] 0 containers: []
	W1213 10:53:03.322448  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:03.322456  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:03.322467  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:03.338484  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:03.338500  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:03.404903  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:03.396282   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.397052   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.398807   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.399322   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:03.400968   15802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:03.404913  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:03.404930  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:03.476102  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:03.476122  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:03.508468  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:03.508484  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.073576  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:06.084007  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:06.084073  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:06.110819  396441 cri.go:89] found id: ""
	I1213 10:53:06.110834  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.110841  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:06.110847  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:06.110915  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:06.136257  396441 cri.go:89] found id: ""
	I1213 10:53:06.136271  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.136278  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:06.136286  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:06.136344  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:06.162392  396441 cri.go:89] found id: ""
	I1213 10:53:06.162406  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.162413  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:06.162419  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:06.162479  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:06.191163  396441 cri.go:89] found id: ""
	I1213 10:53:06.191178  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.191185  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:06.191190  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:06.191244  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:06.217747  396441 cri.go:89] found id: ""
	I1213 10:53:06.217761  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.217769  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:06.217774  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:06.217829  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:06.242838  396441 cri.go:89] found id: ""
	I1213 10:53:06.242851  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.242858  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:06.242864  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:06.242918  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:06.267811  396441 cri.go:89] found id: ""
	I1213 10:53:06.267831  396441 logs.go:282] 0 containers: []
	W1213 10:53:06.267838  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:06.267846  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:06.267857  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:06.351297  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:06.343103   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.343800   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345275   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.345736   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:06.347181   15903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:06.351310  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:06.351321  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:06.418677  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:06.418696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:06.456760  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:06.456778  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:06.525341  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:06.525362  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:09.044095  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:09.054348  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:09.054410  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:09.081344  396441 cri.go:89] found id: ""
	I1213 10:53:09.081358  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.081365  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:09.081376  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:09.081434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:09.107998  396441 cri.go:89] found id: ""
	I1213 10:53:09.108012  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.108019  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:09.108024  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:09.108084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:09.133582  396441 cri.go:89] found id: ""
	I1213 10:53:09.133596  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.133603  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:09.133608  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:09.133666  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:09.158646  396441 cri.go:89] found id: ""
	I1213 10:53:09.158669  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.158677  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:09.158682  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:09.158746  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:09.184013  396441 cri.go:89] found id: ""
	I1213 10:53:09.184028  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.184035  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:09.184040  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:09.184097  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:09.210338  396441 cri.go:89] found id: ""
	I1213 10:53:09.210352  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.210370  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:09.210376  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:09.210434  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:09.236029  396441 cri.go:89] found id: ""
	I1213 10:53:09.236045  396441 logs.go:282] 0 containers: []
	W1213 10:53:09.236052  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:09.236059  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:09.236069  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:09.310970  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:09.298395   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.303364   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.304232   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.305803   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:09.306103   16004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:09.310981  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:09.310992  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:09.380678  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:09.380700  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:09.413354  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:09.413371  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:09.481585  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:09.481603  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:11.996259  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:12.009133  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:12.009217  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:12.044141  396441 cri.go:89] found id: ""
	I1213 10:53:12.044157  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.044164  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:12.044170  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:12.044230  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:12.070547  396441 cri.go:89] found id: ""
	I1213 10:53:12.070579  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.070587  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:12.070598  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:12.070664  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:12.095879  396441 cri.go:89] found id: ""
	I1213 10:53:12.095893  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.095900  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:12.095905  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:12.095965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:12.125533  396441 cri.go:89] found id: ""
	I1213 10:53:12.125547  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.125554  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:12.125559  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:12.125618  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:12.151281  396441 cri.go:89] found id: ""
	I1213 10:53:12.151303  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.151311  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:12.151317  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:12.151385  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:12.176331  396441 cri.go:89] found id: ""
	I1213 10:53:12.176353  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.176361  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:12.176366  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:12.176433  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:12.202465  396441 cri.go:89] found id: ""
	I1213 10:53:12.202486  396441 logs.go:282] 0 containers: []
	W1213 10:53:12.202493  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:12.202500  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:12.202523  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:12.268244  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:12.268263  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:12.285364  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:12.285379  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:12.357173  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:12.347625   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.348521   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350379   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.350883   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:12.352352   16121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:12.357192  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:12.357204  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:12.424809  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:12.424830  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
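[editor's note] Each retry cycle above probes CRI-O for every control-plane component by name and finds nothing. A rough manual equivalent of that probe, as a sketch only (assumes shell access to the node, e.g. via minikube ssh, and that crictl is talking to CRI-O), would be:

    # List all containers, running or exited, whose name matches each
    # control-plane component; an empty result for every one of them means
    # the static pods were never created by kubelet.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done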
	I1213 10:53:14.955688  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:14.967057  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:14.967115  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:14.993136  396441 cri.go:89] found id: ""
	I1213 10:53:14.993150  396441 logs.go:282] 0 containers: []
	W1213 10:53:14.993157  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:14.993163  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:14.993220  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:15.028691  396441 cri.go:89] found id: ""
	I1213 10:53:15.028707  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.028722  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:15.028728  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:15.028794  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:15.056676  396441 cri.go:89] found id: ""
	I1213 10:53:15.056705  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.056732  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:15.056739  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:15.056800  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:15.085199  396441 cri.go:89] found id: ""
	I1213 10:53:15.085213  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.085221  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:15.085226  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:15.085288  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:15.113074  396441 cri.go:89] found id: ""
	I1213 10:53:15.113088  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.113095  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:15.113101  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:15.113159  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:15.142568  396441 cri.go:89] found id: ""
	I1213 10:53:15.142581  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.142589  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:15.142595  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:15.142655  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:15.167430  396441 cri.go:89] found id: ""
	I1213 10:53:15.167443  396441 logs.go:282] 0 containers: []
	W1213 10:53:15.167450  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:15.167458  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:15.167471  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:15.233925  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:15.233946  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:15.248849  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:15.248866  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:15.332377  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:15.324322   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.325030   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.326689   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.327007   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:15.328464   16226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:15.332397  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:15.332409  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:15.401263  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:15.401283  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:17.930625  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:17.940643  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:17.940703  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:17.965657  396441 cri.go:89] found id: ""
	I1213 10:53:17.965671  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.965678  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:17.965683  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:17.965740  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:17.990612  396441 cri.go:89] found id: ""
	I1213 10:53:17.990635  396441 logs.go:282] 0 containers: []
	W1213 10:53:17.990642  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:17.990648  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:17.990723  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:18.025034  396441 cri.go:89] found id: ""
	I1213 10:53:18.025049  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.025057  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:18.025063  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:18.025123  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:18.052589  396441 cri.go:89] found id: ""
	I1213 10:53:18.052611  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.052619  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:18.052625  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:18.052683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:18.079906  396441 cri.go:89] found id: ""
	I1213 10:53:18.079921  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.079929  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:18.079935  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:18.079997  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:18.107302  396441 cri.go:89] found id: ""
	I1213 10:53:18.107327  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.107335  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:18.107340  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:18.107409  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:18.135776  396441 cri.go:89] found id: ""
	I1213 10:53:18.135790  396441 logs.go:282] 0 containers: []
	W1213 10:53:18.135797  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:18.135805  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:18.135815  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:18.153173  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:18.153189  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:18.221544  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:18.213144   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.213793   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215340   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.215838   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:18.217560   16332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:18.221554  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:18.221565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:18.296047  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:18.296072  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:18.330043  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:18.330063  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:20.909395  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:20.919737  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:20.919799  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:20.946000  396441 cri.go:89] found id: ""
	I1213 10:53:20.946014  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.946022  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:20.946027  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:20.946084  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:20.975734  396441 cri.go:89] found id: ""
	I1213 10:53:20.975749  396441 logs.go:282] 0 containers: []
	W1213 10:53:20.975756  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:20.975761  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:20.975815  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:21.000961  396441 cri.go:89] found id: ""
	I1213 10:53:21.000976  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.000983  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:21.000988  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:21.001043  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:21.027875  396441 cri.go:89] found id: ""
	I1213 10:53:21.027889  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.027896  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:21.027902  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:21.027963  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:21.053113  396441 cri.go:89] found id: ""
	I1213 10:53:21.053127  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.053134  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:21.053140  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:21.053198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:21.078404  396441 cri.go:89] found id: ""
	I1213 10:53:21.078418  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.078425  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:21.078430  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:21.078484  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:21.103558  396441 cri.go:89] found id: ""
	I1213 10:53:21.103571  396441 logs.go:282] 0 containers: []
	W1213 10:53:21.103579  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:21.103592  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:21.103604  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:21.172527  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:21.172545  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:21.187768  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:21.187785  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:21.256696  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:21.248073   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249061   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.249753   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251203   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:21.251711   16438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:21.256707  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:21.256717  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:21.327132  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:21.327151  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
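[editor's note] The repeated "connection refused" errors from the describe-nodes step all point at https://localhost:8441, the address in the kubeconfig passed via --kubeconfig. A hypothetical way to confirm the same symptom directly on the node (sketch only, commands not part of the original run) is to check whether anything is listening on that port at all:

    # If no process owns 8441, every kubectl call through this kubeconfig
    # will fail with "connect: connection refused", exactly as logged above.
    sudo ss -ltnp | grep ':8441' || echo "nothing listening on 8441"
    curl -ksS https://localhost:8441/healthz || echo "apiserver unreachable on 8441"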
	I1213 10:53:23.867087  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:23.877218  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:23.877278  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:23.901809  396441 cri.go:89] found id: ""
	I1213 10:53:23.901824  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.901831  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:23.901836  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:23.901892  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:23.928024  396441 cri.go:89] found id: ""
	I1213 10:53:23.928038  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.928044  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:23.928051  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:23.928104  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:23.953141  396441 cri.go:89] found id: ""
	I1213 10:53:23.953154  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.953161  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:23.953166  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:23.953223  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:23.981670  396441 cri.go:89] found id: ""
	I1213 10:53:23.981684  396441 logs.go:282] 0 containers: []
	W1213 10:53:23.981691  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:23.981696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:23.981754  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:24.014889  396441 cri.go:89] found id: ""
	I1213 10:53:24.014904  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.014912  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:24.014917  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:24.014982  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:24.041025  396441 cri.go:89] found id: ""
	I1213 10:53:24.041040  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.041047  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:24.041052  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:24.041110  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:24.068555  396441 cri.go:89] found id: ""
	I1213 10:53:24.068570  396441 logs.go:282] 0 containers: []
	W1213 10:53:24.068578  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:24.068586  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:24.068596  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:24.082803  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:24.082819  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:24.145822  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:24.137676   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.138215   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.139944   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.140400   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:24.141928   16542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:24.145832  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:24.145843  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:24.213727  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:24.213747  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:24.241111  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:24.241126  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:26.808221  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:26.818590  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:26.818659  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:26.848553  396441 cri.go:89] found id: ""
	I1213 10:53:26.848568  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.848575  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:26.848580  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:26.848636  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:26.878256  396441 cri.go:89] found id: ""
	I1213 10:53:26.878274  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.878281  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:26.878288  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:26.878343  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:26.905040  396441 cri.go:89] found id: ""
	I1213 10:53:26.905054  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.905061  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:26.905067  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:26.905140  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:26.933587  396441 cri.go:89] found id: ""
	I1213 10:53:26.933601  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.933608  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:26.933613  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:26.933669  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:26.958154  396441 cri.go:89] found id: ""
	I1213 10:53:26.958167  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.958175  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:26.958180  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:26.958240  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:26.986142  396441 cri.go:89] found id: ""
	I1213 10:53:26.986156  396441 logs.go:282] 0 containers: []
	W1213 10:53:26.986164  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:26.986169  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:26.986222  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:27.013602  396441 cri.go:89] found id: ""
	I1213 10:53:27.013617  396441 logs.go:282] 0 containers: []
	W1213 10:53:27.013625  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:27.013633  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:27.013643  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:27.080830  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:27.080850  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:27.109824  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:27.109839  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:27.175975  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:27.176002  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:27.190437  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:27.190456  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:27.254921  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:27.245674   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.246416   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248026   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.248660   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:27.250260   16662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
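[editor's note] The port kubectl keeps failing to reach is not chosen by kubectl itself; it comes from the server field of the kubeconfig on the node. A sketch of how one might confirm which endpoint that kubeconfig points at (illustrative, not taken from this run):

    # Print only the cluster server URL from the kubeconfig the log passes
    # via --kubeconfig; expected to show https://localhost:8441 here.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -o jsonpath='{.clusters[0].cluster.server}'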
	I1213 10:53:29.755755  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:29.767564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:29.767645  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:29.797908  396441 cri.go:89] found id: ""
	I1213 10:53:29.797922  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.797929  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:29.797935  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:29.797994  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:29.824494  396441 cri.go:89] found id: ""
	I1213 10:53:29.824508  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.824516  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:29.824521  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:29.824577  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:29.853869  396441 cri.go:89] found id: ""
	I1213 10:53:29.853883  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.853890  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:29.853895  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:29.853951  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:29.883491  396441 cri.go:89] found id: ""
	I1213 10:53:29.883504  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.883526  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:29.883531  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:29.883590  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:29.908921  396441 cri.go:89] found id: ""
	I1213 10:53:29.908935  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.908943  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:29.908948  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:29.909004  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:29.938464  396441 cri.go:89] found id: ""
	I1213 10:53:29.938478  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.938485  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:29.938490  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:29.938568  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:29.964642  396441 cri.go:89] found id: ""
	I1213 10:53:29.964658  396441 logs.go:282] 0 containers: []
	W1213 10:53:29.964665  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:29.964672  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:29.964682  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:30.032663  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:30.032688  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:30.050167  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:30.050188  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:30.119376  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:30.110113   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.110970   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.112364   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.113033   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:30.114675   16754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:30.119387  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:30.119398  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:30.188285  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:30.188307  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:32.723464  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:32.734250  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:32.734319  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:32.760154  396441 cri.go:89] found id: ""
	I1213 10:53:32.760168  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.760175  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:32.760180  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:32.760237  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:32.788893  396441 cri.go:89] found id: ""
	I1213 10:53:32.788906  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.788913  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:32.788918  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:32.788973  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:32.815801  396441 cri.go:89] found id: ""
	I1213 10:53:32.815815  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.815822  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:32.815827  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:32.815884  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:32.840740  396441 cri.go:89] found id: ""
	I1213 10:53:32.840754  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.840761  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:32.840766  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:32.840820  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:32.865881  396441 cri.go:89] found id: ""
	I1213 10:53:32.865895  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.865902  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:32.865907  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:32.865962  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:32.891687  396441 cri.go:89] found id: ""
	I1213 10:53:32.891702  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.891709  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:32.891714  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:32.891768  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:32.918219  396441 cri.go:89] found id: ""
	I1213 10:53:32.918233  396441 logs.go:282] 0 containers: []
	W1213 10:53:32.918240  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:32.918248  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:32.918271  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:32.982730  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:32.974018   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.974750   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976353   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.976815   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:32.978478   16851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:32.982749  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:32.982759  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:33.055443  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:33.055464  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:33.092574  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:33.092592  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:33.159246  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:33.159268  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.674110  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:35.683841  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:35.683897  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:35.708708  396441 cri.go:89] found id: ""
	I1213 10:53:35.708722  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.708729  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:35.708735  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:35.708792  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:35.733638  396441 cri.go:89] found id: ""
	I1213 10:53:35.733652  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.733659  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:35.733665  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:35.733725  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:35.759232  396441 cri.go:89] found id: ""
	I1213 10:53:35.759246  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.759254  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:35.759259  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:35.759318  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:35.787542  396441 cri.go:89] found id: ""
	I1213 10:53:35.787557  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.787564  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:35.787569  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:35.787625  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:35.811703  396441 cri.go:89] found id: ""
	I1213 10:53:35.811716  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.811724  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:35.811729  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:35.811786  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:35.837035  396441 cri.go:89] found id: ""
	I1213 10:53:35.837049  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.837057  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:35.837062  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:35.837121  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:35.863392  396441 cri.go:89] found id: ""
	I1213 10:53:35.863406  396441 logs.go:282] 0 containers: []
	W1213 10:53:35.863414  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:35.863421  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:35.863431  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:35.928750  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:35.928771  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:35.943680  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:35.943696  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:36.014992  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:36.001506   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.002280   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.004784   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.005213   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:36.007095   16960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:36.015006  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:36.015018  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:36.088705  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:36.088726  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
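[editor's note] Each retry above opens with a pgrep check for a kube-apiserver process whose command line mentions minikube; its non-zero exit status (no match) is what keeps the wait loop cycling. A minimal manual equivalent, sketch only:

    # Exit 0 and print the newest matching PID if a kube-apiserver process
    # is up; otherwise report that it has not started yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found" \
      || echo "no kube-apiserver process yet"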
	I1213 10:53:38.618865  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:38.628567  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:38.628627  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:38.657828  396441 cri.go:89] found id: ""
	I1213 10:53:38.657842  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.657853  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:38.657859  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:38.657916  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:38.686067  396441 cri.go:89] found id: ""
	I1213 10:53:38.686081  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.686088  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:38.686093  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:38.686148  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:38.723682  396441 cri.go:89] found id: ""
	I1213 10:53:38.723696  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.723703  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:38.723709  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:38.723764  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:38.749537  396441 cri.go:89] found id: ""
	I1213 10:53:38.749552  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.749559  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:38.749564  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:38.749617  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:38.774109  396441 cri.go:89] found id: ""
	I1213 10:53:38.774129  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.774136  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:38.774141  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:38.774198  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:38.799225  396441 cri.go:89] found id: ""
	I1213 10:53:38.799239  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.799263  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:38.799269  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:38.799323  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:38.828154  396441 cri.go:89] found id: ""
	I1213 10:53:38.828168  396441 logs.go:282] 0 containers: []
	W1213 10:53:38.828176  396441 logs.go:284] No container was found matching "kindnet"
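The block above is minikube's control-plane probe: it looks for a kube-apiserver process, then asks crictl for each expected component by name and finds none. A condensed sketch of the same checks, assuming crictl is on PATH (the loop is an editorial condensation of the commands in the log, not minikube's code):

    # Is an apiserver process for this profile running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # Ask the CRI runtime for each control-plane component, matching the queries in the log.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<no containers>}"
    done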
	I1213 10:53:38.828183  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:38.828192  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:38.892547  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:38.892565  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:38.907245  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:38.907267  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:38.971825  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:38.963507   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.964137   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.965780   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.966348   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:38.968042   17064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:38.971835  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:38.971847  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:39.041005  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:39.041026  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.575691  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:41.585703  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:41.585767  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:41.611468  396441 cri.go:89] found id: ""
	I1213 10:53:41.611482  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.611490  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:41.611495  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:41.611582  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:41.637775  396441 cri.go:89] found id: ""
	I1213 10:53:41.637790  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.637797  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:41.637802  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:41.637865  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:41.666669  396441 cri.go:89] found id: ""
	I1213 10:53:41.666683  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.666691  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:41.666696  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:41.666750  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:41.691305  396441 cri.go:89] found id: ""
	I1213 10:53:41.691328  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.691336  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:41.691341  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:41.691403  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:41.716485  396441 cri.go:89] found id: ""
	I1213 10:53:41.716506  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.716514  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:41.716519  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:41.716576  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:41.745432  396441 cri.go:89] found id: ""
	I1213 10:53:41.745446  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.745453  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:41.745458  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:41.745515  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:41.770118  396441 cri.go:89] found id: ""
	I1213 10:53:41.770131  396441 logs.go:282] 0 containers: []
	W1213 10:53:41.770138  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:41.770156  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:41.770165  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:41.799454  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:41.799470  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:41.863838  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:41.863858  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:41.878805  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:41.878821  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:41.944990  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:41.935691   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.936395   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938023   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.938699   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:41.940322   17180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:41.945000  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:41.945011  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.513654  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:44.523863  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 10:53:44.523923  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 10:53:44.556878  396441 cri.go:89] found id: ""
	I1213 10:53:44.556891  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.556912  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 10:53:44.556917  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 10:53:44.556984  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 10:53:44.592098  396441 cri.go:89] found id: ""
	I1213 10:53:44.592111  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.592128  396441 logs.go:284] No container was found matching "etcd"
	I1213 10:53:44.592133  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 10:53:44.592200  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 10:53:44.620862  396441 cri.go:89] found id: ""
	I1213 10:53:44.620875  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.620883  396441 logs.go:284] No container was found matching "coredns"
	I1213 10:53:44.620898  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 10:53:44.620965  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 10:53:44.652601  396441 cri.go:89] found id: ""
	I1213 10:53:44.652615  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.652622  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 10:53:44.652627  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 10:53:44.652683  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 10:53:44.678239  396441 cri.go:89] found id: ""
	I1213 10:53:44.678253  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.678269  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 10:53:44.678275  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 10:53:44.678340  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 10:53:44.703917  396441 cri.go:89] found id: ""
	I1213 10:53:44.703930  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.703938  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 10:53:44.703943  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 10:53:44.704002  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 10:53:44.730484  396441 cri.go:89] found id: ""
	I1213 10:53:44.730497  396441 logs.go:282] 0 containers: []
	W1213 10:53:44.730505  396441 logs.go:284] No container was found matching "kindnet"
	I1213 10:53:44.730523  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 10:53:44.730538  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 10:53:44.744828  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 10:53:44.744844  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 10:53:44.809441  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 10:53:44.801057   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.801582   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803183   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.803696   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 10:53:44.805516   17268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 10:53:44.809451  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 10:53:44.809463  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 10:53:44.877771  396441 logs.go:123] Gathering logs for container status ...
	I1213 10:53:44.877793  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 10:53:44.911088  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 10:53:44.911103  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 10:53:47.481207  396441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 10:53:47.491256  396441 kubeadm.go:602] duration metric: took 4m3.474830683s to restartPrimaryControlPlane
	W1213 10:53:47.491316  396441 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 10:53:47.491392  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:53:47.914152  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:53:47.926543  396441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 10:53:47.934327  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:53:47.934378  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:53:47.941688  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:53:47.941697  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:53:47.941743  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:53:47.949173  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:53:47.949232  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:53:47.956350  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:53:47.963878  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:53:47.963941  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:53:47.971122  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.978729  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:53:47.978780  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:53:47.985856  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:53:47.993466  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:53:47.993519  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
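The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. A rough sketch of that logic as one loop, using the same endpoint and file list as the log (the loop itself is an illustration, not the actual implementation):

    endpoint="https://control-plane.minikube.internal:8441"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it points at the expected endpoint; otherwise drop it
      # so the next kubeadm init writes a fresh one.
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
        sudo rm -f "/etc/kubernetes/$conf"
      fi
    done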
	I1213 10:53:48.001100  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:53:48.045742  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:53:48.045801  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:53:48.119066  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:53:48.119144  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:53:48.119191  396441 kubeadm.go:319] OS: Linux
	I1213 10:53:48.119235  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:53:48.119293  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:53:48.119348  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:53:48.119396  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:53:48.119453  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:53:48.119544  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:53:48.119589  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:53:48.119648  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:53:48.119703  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:53:48.191760  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:53:48.191864  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:53:48.191953  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:53:48.199827  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:53:48.203364  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:53:48.203457  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:53:48.203575  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:53:48.203646  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:53:48.203710  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:53:48.203925  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:53:48.203983  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:53:48.204042  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:53:48.204098  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:53:48.204167  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:53:48.204241  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:53:48.204278  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:53:48.204329  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:53:48.358581  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:53:48.732777  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:53:49.132208  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:53:49.321084  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:53:49.412268  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:53:49.412908  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:53:49.417021  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:53:49.420254  396441 out.go:252]   - Booting up control plane ...
	I1213 10:53:49.420359  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:53:49.420477  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:53:49.421364  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:53:49.437192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:53:49.437314  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:53:49.445560  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:53:49.445850  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:53:49.446065  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:53:49.579988  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:53:49.580095  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 10:57:49.575955  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000564023s
	I1213 10:57:49.575972  396441 kubeadm.go:319] 
	I1213 10:57:49.576025  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 10:57:49.576055  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 10:57:49.576153  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 10:57:49.576156  396441 kubeadm.go:319] 
	I1213 10:57:49.576253  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 10:57:49.576282  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 10:57:49.576311  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 10:57:49.576314  396441 kubeadm.go:319] 
	I1213 10:57:49.584496  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 10:57:49.584979  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 10:57:49.585109  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 10:57:49.585360  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 10:57:49.585367  396441 kubeadm.go:319] 
	I1213 10:57:49.585449  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
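kubeadm's own suggestion above is the practical next step when the kubelet never becomes healthy. A sketch of the checks it names plus the health probe the wait-control-plane phase polls; all three commands appear in the log output, and only the --no-pager/tail trimming is an addition:

    # Service state and recent kubelet logs, as suggested by kubeadm.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # The same health endpoint the wait-control-plane phase polls for up to 4m0s.
    curl -sSL http://127.0.0.1:10248/healthz; echo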
	W1213 10:57:49.585544  396441 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000564023s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
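One of the preflight warnings repeated above says that on a cgroups v1 host, kubelet v1.35 or newer must be told explicitly to tolerate cgroup v1 via the 'FailCgroupV1' configuration option. A hedged sketch of what such a fragment could look like; the lowerCamelCase field name failCgroupV1 in a v1beta1 KubeletConfiguration and the file path are assumptions, not taken from this run, so check the linked KEP before relying on it:

    # Hypothetical illustration only: write a KubeletConfiguration fragment declaring the option.
    # Field name and destination path are assumptions, not part of the captured log.
    cat <<'EOF' | sudo tee /tmp/kubelet-cgroupv1.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF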
	
	I1213 10:57:49.585636  396441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 10:57:50.015805  396441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 10:57:50.030733  396441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 10:57:50.030794  396441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 10:57:50.040503  396441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 10:57:50.040514  396441 kubeadm.go:158] found existing configuration files:
	
	I1213 10:57:50.040573  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1213 10:57:50.049098  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 10:57:50.049158  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 10:57:50.057150  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1213 10:57:50.066557  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 10:57:50.066659  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 10:57:50.074920  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.083448  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 10:57:50.083507  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 10:57:50.092213  396441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1213 10:57:50.100606  396441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 10:57:50.100667  396441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 10:57:50.108705  396441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 10:57:50.150598  396441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 10:57:50.150922  396441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 10:57:50.222346  396441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 10:57:50.222407  396441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 10:57:50.222441  396441 kubeadm.go:319] OS: Linux
	I1213 10:57:50.222482  396441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 10:57:50.222526  396441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 10:57:50.222570  396441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 10:57:50.222621  396441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 10:57:50.222666  396441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 10:57:50.222718  396441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 10:57:50.222760  396441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 10:57:50.222804  396441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 10:57:50.222847  396441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 10:57:50.290176  396441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 10:57:50.290279  396441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 10:57:50.290370  396441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 10:57:50.297738  396441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 10:57:50.303127  396441 out.go:252]   - Generating certificates and keys ...
	I1213 10:57:50.303239  396441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 10:57:50.303307  396441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 10:57:50.303384  396441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 10:57:50.303444  396441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 10:57:50.303589  396441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 10:57:50.303642  396441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 10:57:50.303705  396441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 10:57:50.303769  396441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 10:57:50.303843  396441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 10:57:50.303915  396441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 10:57:50.303952  396441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 10:57:50.304007  396441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 10:57:50.552022  396441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 10:57:50.900706  396441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 10:57:50.944600  396441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 10:57:51.426451  396441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 10:57:51.746824  396441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 10:57:51.747542  396441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 10:57:51.750376  396441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 10:57:51.753437  396441 out.go:252]   - Booting up control plane ...
	I1213 10:57:51.753548  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 10:57:51.753629  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 10:57:51.754233  396441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 10:57:51.768926  396441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 10:57:51.769192  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 10:57:51.780537  396441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 10:57:51.780629  396441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 10:57:51.780668  396441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 10:57:51.907080  396441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 10:57:51.907187  396441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:01:51.907939  396441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001143765s
	I1213 11:01:51.907957  396441 kubeadm.go:319] 
	I1213 11:01:51.908010  396441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:01:51.908040  396441 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:01:51.908138  396441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:01:51.908141  396441 kubeadm.go:319] 
	I1213 11:01:51.908238  396441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:01:51.908267  396441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:01:51.908295  396441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:01:51.908298  396441 kubeadm.go:319] 
	I1213 11:01:51.911942  396441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:01:51.912375  396441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:01:51.912489  396441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:01:51.912750  396441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:01:51.912759  396441 kubeadm.go:319] 
	I1213 11:01:51.912853  396441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
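The stanza above closes the second failed kubeadm init attempt; the log's own hint is to rerun with verbosity raised to see the stack trace. A sketch of that rerun, reusing the exact config path and preflight skips from the logged command and adding only --v=5, which the message itself suggests:

    # Same init as logged, with higher verbosity to expose the stack trace behind the
    # wait-control-plane failure. Only --v=5 is new relative to the logged command.
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables \
      --v=5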
	I1213 11:01:51.912889  396441 kubeadm.go:403] duration metric: took 12m7.937442674s to StartCluster
	I1213 11:01:51.912920  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:01:51.912979  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:01:51.938530  396441 cri.go:89] found id: ""
	I1213 11:01:51.938545  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.938552  396441 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:01:51.938558  396441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:01:51.938614  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:01:51.963977  396441 cri.go:89] found id: ""
	I1213 11:01:51.963991  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.963998  396441 logs.go:284] No container was found matching "etcd"
	I1213 11:01:51.964003  396441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:01:51.964062  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:01:51.988936  396441 cri.go:89] found id: ""
	I1213 11:01:51.988951  396441 logs.go:282] 0 containers: []
	W1213 11:01:51.988958  396441 logs.go:284] No container was found matching "coredns"
	I1213 11:01:51.988963  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:01:51.989016  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:01:52.019417  396441 cri.go:89] found id: ""
	I1213 11:01:52.019431  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.019439  396441 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:01:52.019444  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:01:52.019504  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:01:52.046337  396441 cri.go:89] found id: ""
	I1213 11:01:52.046352  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.046360  396441 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:01:52.046365  396441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:01:52.046426  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:01:52.072247  396441 cri.go:89] found id: ""
	I1213 11:01:52.072261  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.072269  396441 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:01:52.072274  396441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:01:52.072335  396441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:01:52.098208  396441 cri.go:89] found id: ""
	I1213 11:01:52.098222  396441 logs.go:282] 0 containers: []
	W1213 11:01:52.098230  396441 logs.go:284] No container was found matching "kindnet"
	I1213 11:01:52.098238  396441 logs.go:123] Gathering logs for kubelet ...
	I1213 11:01:52.098248  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:01:52.165245  396441 logs.go:123] Gathering logs for dmesg ...
	I1213 11:01:52.165265  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:01:52.179908  396441 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:01:52.179924  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:01:52.245950  396441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 11:01:52.237532   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.238206   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.239883   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.240475   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:01:52.242071   21064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:01:52.245965  396441 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:01:52.245974  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:01:52.322777  396441 logs.go:123] Gathering logs for container status ...
	I1213 11:01:52.322795  396441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 11:01:52.353497  396441 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:01:52.353528  396441 out.go:285] * 
	W1213 11:01:52.353591  396441 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.353607  396441 out.go:285] * 
	W1213 11:01:52.355785  396441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:01:52.362615  396441 out.go:203] 
	W1213 11:01:52.366304  396441 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001143765s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:01:52.366353  396441 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:01:52.366376  396441 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:01:52.369563  396441 out.go:203] 
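The suggestion lines above translate into a handful of concrete commands. A minimal sketch, assuming the same functional-407525 profile and the test run's own minikube binary; whether the suggested cgroup-driver flag actually clears the cgroup v1 validation failure shown in the kubelet journal further below is not verified here:

	# Diagnostics recommended by the kubeadm output (run on the node via minikube ssh):
	out/minikube-linux-arm64 ssh -p functional-407525 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p functional-407525 -- sudo journalctl -xeu kubelet
	# Restart with the flag named in the suggestion above:
	out/minikube-linux-arm64 start -p functional-407525 --extra-config=kubelet.cgroup-driver=systemd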
	
	
	==> CRI-O <==
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.43259327Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432628568Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432669931Z" level=info msg="Create NRI interface"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432773423Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432782531Z" level=info msg="runtime interface created"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432793805Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432800656Z" level=info msg="runtime interface starting up..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432807844Z" level=info msg="starting plugins..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432820907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432883414Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:49:42 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.19567159Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c8401471-cf55-4e91-8c5f-25a7803eeff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.1966268Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=72a9b02f-646a-4554-ae9a-9e3da3b7ad0c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197123888Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9caf3dbd-ac4b-4ee0-a136-15962b2eeea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197584529Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=86fa4638-cc37-45ef-b1b9-31efae43690d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198007073Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=37f9bdfd-077a-4751-a897-e7c971db1d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198454331Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f02d4db1-79bc-4d79-9072-497dd5c75d43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198871681Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a0158e10-bee2-405d-9643-45512681023c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.293525942Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fa6c343-c4b6-41b8-a772-00d9ff9f481b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294225272Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f29d3de7-c9c2-4c34-9a76-76647c28c359 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294692649Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=115a2b32-9e68-43c7-90af-1d4450976368 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295176544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cce5b0a2-af51-4974-8c4f-26d3aadd70cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295829785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bba9558c-4301-4576-890b-64bddc5af9b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296320695Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59bc3a50-c36c-4024-8506-47dbb78201d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296784429Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=97458369-23f9-4acf-a127-9b41f30c00a3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:03:43.348842   22530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:43.349509   22530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:43.351104   22530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:43.351437   22530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:03:43.352956   22530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:03:43 up  2:46,  0 user,  load average: 0.17, 0.18, 0.39
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:03:41 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:41 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1107.
	Dec 13 11:03:41 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:41 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:41 functional-407525 kubelet[22419]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:41 functional-407525 kubelet[22419]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:41 functional-407525 kubelet[22419]: E1213 11:03:41.840818   22419 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:41 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:41 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:42 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1108.
	Dec 13 11:03:42 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:42 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:42 functional-407525 kubelet[22440]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:42 functional-407525 kubelet[22440]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:42 functional-407525 kubelet[22440]: E1213 11:03:42.587396   22440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:42 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:42 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:03:43 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1109.
	Dec 13 11:03:43 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:43 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:03:43 functional-407525 kubelet[22523]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:43 functional-407525 kubelet[22523]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:03:43 functional-407525 kubelet[22523]: E1213 11:03:43.329872   22523 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:03:43 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:03:43 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
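The kubelet journal above shows why kubeadm's wait-control-plane phase never completes: each kubelet restart fails its own configuration validation on this cgroup v1 host, so the health endpoint never answers. A short sketch, assuming the profile is still up, of reproducing the probe and the empty container listing by hand (the probe URL and the crictl invocation are taken from the output above):

	# Probe the healthz endpoint named in the wait-control-plane error:
	out/minikube-linux-arm64 ssh -p functional-407525 -- curl -sSL http://127.0.0.1:10248/healthz
	# Confirm no control-plane containers exist, matching '==> container status <==':
	out/minikube-linux-arm64 ssh -p functional-407525 -- sudo crictl ps -a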
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (378.073605ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 11:02:10.577039  356328 retry.go:31] will retry after 3.577174746s: Temporary Error: Get "http://10.98.134.106": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 11:02:24.155580  356328 retry.go:31] will retry after 5.511113972s: Temporary Error: Get "http://10.98.134.106": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1213 11:02:27.930748  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 11:02:39.667280  356328 retry.go:31] will retry after 3.858211119s: Temporary Error: Get "http://10.98.134.106": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 11:02:53.525945  356328 retry.go:31] will retry after 7.542869584s: Temporary Error: Get "http://10.98.134.106": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1213 11:03:11.070057  356328 retry.go:31] will retry after 20.438204685s: Temporary Error: Get "http://10.98.134.106": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1213 11:05:31.001131  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the warning above was repeated 24 more times while the test polled for the storage-provisioner pod]
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (335.272776ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
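The warnings above come from the test harness repeatedly listing pods in the kube-system namespace by the integration-test=storage-provisioner label until the pod is Running or the 4m0s deadline passes; every attempt fails because nothing answers on 192.168.49.2:8441. The following is a minimal client-go sketch of that style of poll, not the actual minikube helper; the label selector and timeout are taken from the log above, and the kubeconfig path is assumed to be the default one.

// poll_pod.go: wait for a labelled pod to reach Running, or give up after a deadline.
// Illustrative sketch only; assumes a kubeconfig at the default location.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	selector := "integration-test=storage-provisioner" // label from the failing test
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// This is where the "connection refused" warnings above originate:
			// the apiserver on :8441 is down, so every List call fails.
			fmt.Println("WARNING:", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod is running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up: context deadline exceeded")
			return
		case <-time.After(5 * time.Second):
		}
	}
}

With the apiserver down, every List call returns a connection-refused error, which is exactly the warning stream recorded above until the deadline expires.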
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
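The inspect output shows the node container still running, with the apiserver port 8441/tcp published on 127.0.0.1:33161 even though nothing inside the container accepts connections on it. A quick way to confirm that from the host is to read the mapping with a docker inspect format template and attempt a TCP dial; the sketch below assumes the container name and port shown in the output above and is illustrative only.

// probe_apiserver.go: read the published host port for 8441/tcp from docker inspect
// and check whether anything is listening there. Illustrative sketch only.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const container = "functional-407525" // name taken from the docker inspect output above

	// Equivalent to:
	// docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-407525
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		panic(err)
	}
	hostPort := strings.TrimSpace(string(out))

	addr := net.JoinHostPort("127.0.0.1", hostPort)
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// Matches the failure mode in this report: the container is up,
		// but the apiserver behind the published port is not.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on", addr)
}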
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (315.662257ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-407525 image load --daemon kicbase/echo-server:functional-407525 --alsologtostderr                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls                                                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image save kicbase/echo-server:functional-407525 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image rm kicbase/echo-server:functional-407525 --alsologtostderr                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls                                                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls                                                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image save --daemon kicbase/echo-server:functional-407525 --alsologtostderr                                                             │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh sudo cat /etc/test/nested/copy/356328/hosts                                                                                         │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh sudo cat /etc/ssl/certs/356328.pem                                                                                                  │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh sudo cat /usr/share/ca-certificates/356328.pem                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh sudo cat /etc/ssl/certs/3563282.pem                                                                                                 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh sudo cat /usr/share/ca-certificates/3563282.pem                                                                                     │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls --format short --alsologtostderr                                                                                               │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls --format yaml --alsologtostderr                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ ssh            │ functional-407525 ssh pgrep buildkitd                                                                                                                     │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │                     │
	│ image          │ functional-407525 image build -t localhost/my-image:functional-407525 testdata/build --alsologtostderr                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls                                                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls --format json --alsologtostderr                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ image          │ functional-407525 image ls --format table --alsologtostderr                                                                                               │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ update-context │ functional-407525 update-context --alsologtostderr -v=2                                                                                                   │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ update-context │ functional-407525 update-context --alsologtostderr -v=2                                                                                                   │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	│ update-context │ functional-407525 update-context --alsologtostderr -v=2                                                                                                   │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:04 UTC │ 13 Dec 25 11:04 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:03:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:03:59.259655  413709 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:03:59.259777  413709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.259815  413709 out.go:374] Setting ErrFile to fd 2...
	I1213 11:03:59.259828  413709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.260495  413709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:03:59.261014  413709 out.go:368] Setting JSON to false
	I1213 11:03:59.261908  413709 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9992,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:03:59.262052  413709 start.go:143] virtualization:  
	I1213 11:03:59.265224  413709 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:03:59.267327  413709 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:03:59.267393  413709 notify.go:221] Checking for updates...
	I1213 11:03:59.272993  413709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:03:59.275780  413709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:03:59.278640  413709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:03:59.281443  413709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:03:59.284249  413709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:03:59.287678  413709 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:03:59.288244  413709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:03:59.310820  413709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:03:59.310948  413709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:59.373554  413709 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.36434928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:59.373661  413709 docker.go:319] overlay module found
	I1213 11:03:59.376646  413709 out.go:179] * Using the docker driver based on existing profile
	I1213 11:03:59.379458  413709 start.go:309] selected driver: docker
	I1213 11:03:59.379478  413709 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:59.379619  413709 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:03:59.379724  413709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:59.442090  413709 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.432223878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:59.442555  413709 cni.go:84] Creating CNI manager for ""
	I1213 11:03:59.442618  413709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:03:59.442659  413709 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:59.445807  413709 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.293525942Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fa6c343-c4b6-41b8-a772-00d9ff9f481b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294225272Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f29d3de7-c9c2-4c34-9a76-76647c28c359 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294692649Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=115a2b32-9e68-43c7-90af-1d4450976368 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295176544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cce5b0a2-af51-4974-8c4f-26d3aadd70cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295829785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bba9558c-4301-4576-890b-64bddc5af9b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296320695Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59bc3a50-c36c-4024-8506-47dbb78201d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296784429Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=97458369-23f9-4acf-a127-9b41f30c00a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.682055775Z" level=info msg="Checking image status: kicbase/echo-server:functional-407525" id=c11a9fb2-5139-4a9e-8b96-219ee75041c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.682287679Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.682348078Z" level=info msg="Image kicbase/echo-server:functional-407525 not found" id=c11a9fb2-5139-4a9e-8b96-219ee75041c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.682428284Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-407525 found" id=c11a9fb2-5139-4a9e-8b96-219ee75041c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.706548848Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-407525" id=3f0e6c12-f776-41cd-8efa-184786b26d98 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.706690577Z" level=info msg="Image docker.io/kicbase/echo-server:functional-407525 not found" id=3f0e6c12-f776-41cd-8efa-184786b26d98 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.706732153Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-407525 found" id=3f0e6c12-f776-41cd-8efa-184786b26d98 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.730728728Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-407525" id=638be546-80ec-4f34-9a55-1090ce0311ad name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.730895598Z" level=info msg="Image localhost/kicbase/echo-server:functional-407525 not found" id=638be546-80ec-4f34-9a55-1090ce0311ad name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:04 functional-407525 crio[9900]: time="2025-12-13T11:04:04.730935172Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-407525 found" id=638be546-80ec-4f34-9a55-1090ce0311ad name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.80699566Z" level=info msg="Checking image status: kicbase/echo-server:functional-407525" id=eaffb8df-b21f-4d46-b8e0-03f4465d3ae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.807195834Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.807257062Z" level=info msg="Image kicbase/echo-server:functional-407525 not found" id=eaffb8df-b21f-4d46-b8e0-03f4465d3ae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.807358068Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-407525 found" id=eaffb8df-b21f-4d46-b8e0-03f4465d3ae3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.833079445Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-407525" id=0970b25c-9779-44d4-80c2-e98d20bde03e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.833240997Z" level=info msg="Image docker.io/kicbase/echo-server:functional-407525 not found" id=0970b25c-9779-44d4-80c2-e98d20bde03e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.833292649Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-407525 found" id=0970b25c-9779-44d4-80c2-e98d20bde03e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:04:07 functional-407525 crio[9900]: time="2025-12-13T11:04:07.858038041Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-407525" id=6b204647-6664-457e-865e-bcda04163fe6 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:06:02.277202   25347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:06:02.277659   25347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:06:02.279187   25347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:06:02.279659   25347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:06:02.281110   25347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:06:02 up  2:48,  0 user,  load average: 0.31, 0.32, 0.42
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:05:59 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:06:00 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1292.
	Dec 13 11:06:00 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:06:00 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:06:00 functional-407525 kubelet[25219]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:06:00 functional-407525 kubelet[25219]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:06:00 functional-407525 kubelet[25219]: E1213 11:06:00.573878   25219 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:06:00 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:06:00 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:06:01 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 13 11:06:01 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:06:01 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:06:01 functional-407525 kubelet[25237]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:06:01 functional-407525 kubelet[25237]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:06:01 functional-407525 kubelet[25237]: E1213 11:06:01.320047   25237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:06:01 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:06:01 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:06:02 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1294.
	Dec 13 11:06:02 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:06:02 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:06:02 functional-407525 kubelet[25299]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:06:02 functional-407525 kubelet[25299]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:06:02 functional-407525 kubelet[25299]: E1213 11:06:02.095950   25299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:06:02 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:06:02 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (323.132735ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.71s)
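The kubelet entries at the end of the log explain why the apiserver never comes back: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, and systemd keeps restarting it (the counter is at 1294 by the time the logs are captured). A common way to check which cgroup hierarchy a host exposes is to look for /sys/fs/cgroup/cgroup.controllers, which only exists under the unified cgroup v2 hierarchy; the sketch below uses that heuristic and is not taken from kubelet or minikube code.

// cgroup_version.go: report whether the host exposes a unified cgroup v2 hierarchy.
// Minimal sketch of the common heuristic; illustrative only.
package main

import (
	"fmt"
	"os"
)

func main() {
	// On cgroup v2 (unified hierarchy) this file exists at the cgroup root;
	// on cgroup v1 it does not.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 - the kubelet in this report refuses to run here")
	}
}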

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-407525 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-407525 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (61.111204ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-407525 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
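The template error is a secondary symptom: because the apiserver refuses connections, kubectl hands the template an empty List, and index .items 0 then fails since the slice has no elements. The sketch below reproduces that behaviour with text/template and shows a range-based variant that simply prints nothing for an empty list; it is illustrative only, not the test's own code.

// template_index.go: show why `index .items 0` fails on an empty list and how
// a range-based template avoids the error. Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// Shape of the data kubectl handed to the template in the log above.
	data := map[string]any{
		"apiVersion": "v1",
		"kind":       "List",
		"items":      []any{}, // empty because the apiserver was unreachable
	}

	// Same construct the test uses: indexing element 0 of an empty slice errors out.
	bad := template.Must(template.New("bad").Parse(`{{(index .items 0)}}`))
	if err := bad.Execute(os.Stdout, data); err != nil {
		fmt.Println("index on empty items fails:", err)
	}

	// A range simply produces no output when items is empty.
	ok := template.Must(template.New("ok").Parse(`{{range .items}}{{.}} {{end}}`))
	if err := ok.Execute(os.Stdout, data); err != nil {
		fmt.Println("unexpected:", err)
	}
}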
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
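Every one of the missing-label assertions above fails for the same underlying reason: the apiserver at 192.168.49.2:8441 refused the connection, kubectl therefore returned an empty List, and `index .items 0` then errors inside the go-template. A minimal sketch using only Go's text/template, showing how the same template behaves once a length guard is added (the guard is a suggested change for illustration, not what functional_test.go actually runs):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// The shape kubectl handed to the template engine here: a List with no items.
		emptyList := map[string]interface{}{
			"apiVersion": "v1",
			"kind":       "List",
			"items":      []interface{}{},
		}
		// Guarded variant of the test's template: only index .items when it is
		// non-empty, so an unreachable apiserver prints nothing instead of
		// "error calling index: reflect: slice index out of range".
		guarded := `{{if gt (len .items) 0}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}`
		t := template.Must(template.New("output").Parse(guarded))
		if err := t.Execute(os.Stdout, emptyList); err != nil {
			panic(err)
		}
	}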
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-407525
helpers_test.go:244: (dbg) docker inspect functional-407525:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	        "Created": "2025-12-13T10:34:59.162458661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T10:34:59.230276401Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/hosts",
	        "LogPath": "/var/lib/docker/containers/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7/7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7-json.log",
	        "Name": "/functional-407525",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-407525:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-407525",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fc3d6bd328ad727c5fdba39f6b4e89c84786625b1be9b0e87c3b588bb34daf7",
	                "LowerDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33796b67f8b4c3f1295e92bebbd15116562726cb15b5e32ef086ea84071ae50d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-407525",
	                "Source": "/var/lib/docker/volumes/functional-407525/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-407525",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-407525",
	                "name.minikube.sigs.k8s.io": "functional-407525",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8c72e3de62f4751cebe2c5a489ec3040a7f771c4c912b4414d5eb26c67d8e4",
	            "SandboxKey": "/var/run/docker/netns/fb8c72e3de62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-407525": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:c5:1d:c8:5d:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8bb3fce07852261971da0e26f4e28c90471b6da820443a0b657c0bf09d2f7042",
	                    "EndpointID": "3a907b06ccc449fc18f0cf71710374046514d7011757e3e81bb1c73b267fe8c9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-407525",
	                        "7fc3d6bd328a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
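The inspect output shows the container itself is healthy and that the apiserver port 8441/tcp is published on 127.0.0.1:33161, so the failure sits inside the guest (no kube-apiserver process), not in Docker networking. For reference, that mapping can be pulled out with the same Ports-indexing template the harness applies to 22/tcp later in this log; switching it to "8441/tcp" is an adaptation for illustration, not a command this test ran:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent to:
		//   docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-407525
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-407525").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // expected here: 33161
	}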
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-407525 -n functional-407525: exit status 2 (307.551144ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-407525 service hello-node --url                                                                                                          │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001:/mount-9p --alsologtostderr -v=1              │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh -- ls -la /mount-9p                                                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh cat /mount-9p/test-1765623829395310073                                                                                        │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh sudo umount -f /mount-9p                                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1621853940/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh -- ls -la /mount-9p                                                                                                           │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh sudo umount -f /mount-9p                                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount1 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount3 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount1                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ mount     │ -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount2 --alsologtostderr -v=1                │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ ssh       │ functional-407525 ssh findmnt -T /mount1                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh findmnt -T /mount2                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ ssh       │ functional-407525 ssh findmnt -T /mount3                                                                                                            │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │ 13 Dec 25 11:03 UTC │
	│ mount     │ -p functional-407525 --kill=true                                                                                                                    │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ start     │ -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ start     │ -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ start     │ -p functional-407525 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-407525 --alsologtostderr -v=1                                                                                      │ functional-407525 │ jenkins │ v1.37.0 │ 13 Dec 25 11:03 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:03:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:03:59.259655  413709 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:03:59.259777  413709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.259815  413709 out.go:374] Setting ErrFile to fd 2...
	I1213 11:03:59.259828  413709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.260495  413709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:03:59.261014  413709 out.go:368] Setting JSON to false
	I1213 11:03:59.261908  413709 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9992,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:03:59.262052  413709 start.go:143] virtualization:  
	I1213 11:03:59.265224  413709 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:03:59.267327  413709 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:03:59.267393  413709 notify.go:221] Checking for updates...
	I1213 11:03:59.272993  413709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:03:59.275780  413709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:03:59.278640  413709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:03:59.281443  413709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:03:59.284249  413709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:03:59.287678  413709 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:03:59.288244  413709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:03:59.310820  413709 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:03:59.310948  413709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:59.373554  413709 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.36434928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:59.373661  413709 docker.go:319] overlay module found
	I1213 11:03:59.376646  413709 out.go:179] * Using the docker driver based on existing profile
	I1213 11:03:59.379458  413709 start.go:309] selected driver: docker
	I1213 11:03:59.379478  413709 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:59.379619  413709 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:03:59.379724  413709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:59.442090  413709 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.432223878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:59.442555  413709 cni.go:84] Creating CNI manager for ""
	I1213 11:03:59.442618  413709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:03:59.442659  413709 start.go:353] cluster config:
	{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:59.445807  413709 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.43259327Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432628568Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432669931Z" level=info msg="Create NRI interface"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432773423Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432782531Z" level=info msg="runtime interface created"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432793805Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432800656Z" level=info msg="runtime interface starting up..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432807844Z" level=info msg="starting plugins..."
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432820907Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 10:49:42 functional-407525 crio[9900]: time="2025-12-13T10:49:42.432883414Z" level=info msg="No systemd watchdog enabled"
	Dec 13 10:49:42 functional-407525 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.19567159Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c8401471-cf55-4e91-8c5f-25a7803eeff9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.1966268Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=72a9b02f-646a-4554-ae9a-9e3da3b7ad0c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197123888Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=9caf3dbd-ac4b-4ee0-a136-15962b2eeea0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.197584529Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=86fa4638-cc37-45ef-b1b9-31efae43690d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198007073Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=37f9bdfd-077a-4751-a897-e7c971db1d6b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198454331Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=f02d4db1-79bc-4d79-9072-497dd5c75d43 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:53:48 functional-407525 crio[9900]: time="2025-12-13T10:53:48.198871681Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a0158e10-bee2-405d-9643-45512681023c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.293525942Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3fa6c343-c4b6-41b8-a772-00d9ff9f481b name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294225272Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=f29d3de7-c9c2-4c34-9a76-76647c28c359 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.294692649Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=115a2b32-9e68-43c7-90af-1d4450976368 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295176544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cce5b0a2-af51-4974-8c4f-26d3aadd70cb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.295829785Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bba9558c-4301-4576-890b-64bddc5af9b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296320695Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=59bc3a50-c36c-4024-8506-47dbb78201d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 10:57:50 functional-407525 crio[9900]: time="2025-12-13T10:57:50.296784429Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=97458369-23f9-4acf-a127-9b41f30c00a3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 11:04:02.557168   23531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:02.557800   23531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:02.559607   23531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:02.560162   23531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1213 11:04:02.561739   23531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.400796] kauditd_printk_skb: 36 callbacks suppressed
	[Dec13 10:24] kauditd_printk_skb: 8 callbacks suppressed
	[Dec13 10:25] overlayfs: idmapped layers are currently not supported
	[  +0.081323] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec13 10:31] overlayfs: idmapped layers are currently not supported
	[Dec13 10:32] overlayfs: idmapped layers are currently not supported
	[Dec13 10:42] hrtimer: interrupt took 21684953 ns
	[Dec13 10:49] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:04:02 up  2:46,  0 user,  load average: 0.42, 0.23, 0.41
	Linux functional-407525 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:03:59 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:04:00 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 13 11:04:00 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:00 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:00 functional-407525 kubelet[23317]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:00 functional-407525 kubelet[23317]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:00 functional-407525 kubelet[23317]: E1213 11:04:00.614672   23317 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:04:00 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:04:00 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:04:01 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1133.
	Dec 13 11:04:01 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:01 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:01 functional-407525 kubelet[23412]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:01 functional-407525 kubelet[23412]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:01 functional-407525 kubelet[23412]: E1213 11:04:01.328486   23412 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:04:01 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:04:01 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:04:02 functional-407525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1134.
	Dec 13 11:04:02 functional-407525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:02 functional-407525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:04:02 functional-407525 kubelet[23447]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:02 functional-407525 kubelet[23447]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:04:02 functional-407525 kubelet[23447]: E1213 11:04:02.079055   23447 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:04:02 functional-407525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:04:02 functional-407525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
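The kubelet section of the logs above is the actual root cause of the dead apiserver: kubelet refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the service crash-loops (restart counter already past 1130) and no control-plane pods ever come up. A short sketch, using only the standard library, of the usual way to tell the two hierarchies apart on Linux: a cgroup v2 (unified) host exposes cgroup.controllers at the cgroup mount root, a v1 host does not.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// cgroup v2 exposes cgroup.controllers at the root of the unified mount;
		// cgroup v1 hosts (like this Ubuntu 20.04 runner) do not have it.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1 (legacy hierarchy)")
		} else {
			fmt.Println("could not determine cgroup version:", err)
		}
	}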
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-407525 -n functional-407525: exit status 2 (309.129844ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-407525" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1213 11:01:59.945633  409449 out.go:360] Setting OutFile to fd 1 ...
I1213 11:01:59.946622  409449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:01:59.946666  409449 out.go:374] Setting ErrFile to fd 2...
I1213 11:01:59.946691  409449 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:01:59.947011  409449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:01:59.947984  409449 mustload.go:66] Loading cluster: functional-407525
I1213 11:01:59.948574  409449 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:01:59.949457  409449 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:02:00.004320  409449 host.go:66] Checking if "functional-407525" exists ...
I1213 11:02:00.004682  409449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 11:02:00.128304  409449 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:02:00.115103603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 11:02:00.128459  409449 api_server.go:166] Checking apiserver status ...
I1213 11:02:00.128525  409449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1213 11:02:00.128570  409449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:02:00.183614  409449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
W1213 11:02:00.337188  409449 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1213 11:02:00.340688  409449 out.go:179] * The control-plane node functional-407525 apiserver is not running: (state=Stopped)
I1213 11:02:00.343763  409449 out.go:179]   To start a cluster, run: "minikube start -p functional-407525"

                                                
                                                
stdout: * The control-plane node functional-407525 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-407525"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 409450: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
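The tunnel failures above all trace back to the stopped apiserver on functional-407525. A minimal manual re-check of the same probes the log performs (profile name taken from this run; not part of the test itself) would be:

	out/minikube-linux-arm64 -p functional-407525 status --format={{.Host}}
	out/minikube-linux-arm64 -p functional-407525 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"

With the control plane stopped, the pgrep exits with status 1 exactly as in the api_server.go:170 warning above, and every kubectl call against 192.168.49.2:8441 in the tests that follow is refused for the same reason.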

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-407525 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-407525 apply -f testdata/testsvc.yaml: exit status 1 (106.177348ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-407525 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (100.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.98.134.106": Temporary Error: Get "http://10.98.134.106": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-407525 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-407525 get svc nginx-svc: exit status 1 (58.624671ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-407525 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (100.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-407525 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-407525 create deployment hello-node --image kicbase/echo-server: exit status 1 (53.814414ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-407525 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 service list: exit status 103 (259.127597ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-407525 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-407525"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-407525 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-407525 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-407525\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 service list -o json: exit status 103 (265.815388ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-407525 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-407525"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-407525 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 service --namespace=default --https --url hello-node: exit status 103 (326.670294ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-407525 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-407525"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-407525 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 service hello-node --url --format={{.IP}}: exit status 103 (305.563758ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-407525 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-407525"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-407525 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-407525 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-407525\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 service hello-node --url: exit status 103 (284.00304ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-407525 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-407525"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-407525 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-407525 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-407525"
functional_test.go:1579: failed to parse "* The control-plane node functional-407525 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-407525\"": parse "* The control-plane node functional-407525 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-407525\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765623829395310073" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765623829395310073" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765623829395310073" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001/test-1765623829395310073
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.011852ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 11:03:49.722953  356328 retry.go:31] will retry after 564.438245ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 11:03 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 11:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 11:03 test-1765623829395310073
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh cat /mount-9p/test-1765623829395310073
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-407525 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-407525 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.050949ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-407525 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (280.697866ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=37229)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 13 11:03 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 13 11:03 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 13 11:03 test-1765623829395310073
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-407525 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:37229
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001:/mount-9p --alsologtostderr -v=1] stderr:
I1213 11:03:49.460171  411768 out.go:360] Setting OutFile to fd 1 ...
I1213 11:03:49.460533  411768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:03:49.460562  411768 out.go:374] Setting ErrFile to fd 2...
I1213 11:03:49.460588  411768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:03:49.460982  411768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:03:49.461361  411768 mustload.go:66] Loading cluster: functional-407525
I1213 11:03:49.461847  411768 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:03:49.462560  411768 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:03:49.483930  411768 host.go:66] Checking if "functional-407525" exists ...
I1213 11:03:49.484296  411768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 11:03:49.570600  411768 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:49.558647267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 11:03:49.570806  411768 cli_runner.go:164] Run: docker network inspect functional-407525 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 11:03:49.595293  411768 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001 into VM as /mount-9p ...
I1213 11:03:49.598305  411768 out.go:179]   - Mount type:   9p
I1213 11:03:49.601106  411768 out.go:179]   - User ID:      docker
I1213 11:03:49.603988  411768 out.go:179]   - Group ID:     docker
I1213 11:03:49.606988  411768 out.go:179]   - Version:      9p2000.L
I1213 11:03:49.609852  411768 out.go:179]   - Message Size: 262144
I1213 11:03:49.612916  411768 out.go:179]   - Options:      map[]
I1213 11:03:49.615904  411768 out.go:179]   - Bind Address: 192.168.49.1:37229
I1213 11:03:49.618827  411768 out.go:179] * Userspace file server: 
I1213 11:03:49.619159  411768 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 11:03:49.619252  411768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:03:49.639449  411768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
I1213 11:03:49.746366  411768 mount.go:180] unmount for /mount-9p ran successfully
I1213 11:03:49.746395  411768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1213 11:03:49.754876  411768 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37229,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1213 11:03:49.765441  411768 main.go:127] stdlog: ufs.go:141 connected
I1213 11:03:49.765605  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tversion tag 65535 msize 262144 version '9P2000.L'
I1213 11:03:49.765648  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rversion tag 65535 msize 262144 version '9P2000'
I1213 11:03:49.765884  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1213 11:03:49.765940  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rattach tag 0 aqid (ed6f10 1761938d 'd')
I1213 11:03:49.766600  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 0
I1213 11:03:49.766688  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f10 1761938d 'd') m d775 at 0 mt 1765623829 l 4096 t 0 d 0 ext )
I1213 11:03:49.770513  411768 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/.mount-process: {Name:mk3d046374952ea547ed391f0a5ab0f709898323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 11:03:49.770732  411768 mount.go:105] mount successful: ""
I1213 11:03:49.774058  411768 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1371719630/001 to /mount-9p
I1213 11:03:49.776874  411768 out.go:203] 
I1213 11:03:49.779661  411768 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1213 11:03:50.823870  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 0
I1213 11:03:50.823954  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f10 1761938d 'd') m d775 at 0 mt 1765623829 l 4096 t 0 d 0 ext )
I1213 11:03:50.824279  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 1 
I1213 11:03:50.824312  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 
I1213 11:03:50.824406  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Topen tag 0 fid 1 mode 0
I1213 11:03:50.824450  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Ropen tag 0 qid (ed6f10 1761938d 'd') iounit 0
I1213 11:03:50.824543  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 0
I1213 11:03:50.824574  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f10 1761938d 'd') m d775 at 0 mt 1765623829 l 4096 t 0 d 0 ext )
I1213 11:03:50.824696  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 0 count 262120
I1213 11:03:50.824820  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 258
I1213 11:03:50.824973  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 258 count 261862
I1213 11:03:50.825020  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:50.825156  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 258 count 262120
I1213 11:03:50.825198  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:50.825337  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 11:03:50.825377  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 (ed6f11 1761938d '') 
I1213 11:03:50.825488  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:50.825552  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f11 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:50.825673  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:50.825721  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f11 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:50.825855  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 2
I1213 11:03:50.825894  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:50.826032  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 2 0:'test-1765623829395310073' 
I1213 11:03:50.826069  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 (ed6f13 1761938d '') 
I1213 11:03:50.826182  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:50.826240  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('test-1765623829395310073' 'jenkins' 'jenkins' '' q (ed6f13 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:50.826371  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:50.826407  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('test-1765623829395310073' 'jenkins' 'jenkins' '' q (ed6f13 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:50.826521  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 2
I1213 11:03:50.826554  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:50.826674  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 11:03:50.826711  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 (ed6f12 1761938d '') 
I1213 11:03:50.826838  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:50.826895  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f12 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:50.827004  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:50.827039  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f12 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:50.827148  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 2
I1213 11:03:50.827174  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:50.827320  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 258 count 262120
I1213 11:03:50.827363  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:50.827502  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 1
I1213 11:03:50.827596  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.116021  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 1 0:'test-1765623829395310073' 
I1213 11:03:51.116095  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 (ed6f13 1761938d '') 
I1213 11:03:51.116257  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 1
I1213 11:03:51.116301  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('test-1765623829395310073' 'jenkins' 'jenkins' '' q (ed6f13 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.116443  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 1 newfid 2 
I1213 11:03:51.116490  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 
I1213 11:03:51.116647  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Topen tag 0 fid 2 mode 0
I1213 11:03:51.116723  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Ropen tag 0 qid (ed6f13 1761938d '') iounit 0
I1213 11:03:51.116883  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 1
I1213 11:03:51.116925  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('test-1765623829395310073' 'jenkins' 'jenkins' '' q (ed6f13 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.117069  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 2 offset 0 count 262120
I1213 11:03:51.117109  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 24
I1213 11:03:51.117247  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 2 offset 24 count 262120
I1213 11:03:51.117277  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:51.117417  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 2 offset 24 count 262120
I1213 11:03:51.117451  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:51.117625  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 2
I1213 11:03:51.117656  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.117853  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 1
I1213 11:03:51.117880  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.458460  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 0
I1213 11:03:51.458557  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f10 1761938d 'd') m d775 at 0 mt 1765623829 l 4096 t 0 d 0 ext )
I1213 11:03:51.458913  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 1 
I1213 11:03:51.458948  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 
I1213 11:03:51.459069  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Topen tag 0 fid 1 mode 0
I1213 11:03:51.459132  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Ropen tag 0 qid (ed6f10 1761938d 'd') iounit 0
I1213 11:03:51.459301  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 0
I1213 11:03:51.459337  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f10 1761938d 'd') m d775 at 0 mt 1765623829 l 4096 t 0 d 0 ext )
I1213 11:03:51.459500  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 0 count 262120
I1213 11:03:51.459628  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 258
I1213 11:03:51.459786  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 258 count 261862
I1213 11:03:51.459819  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:51.459947  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 258 count 262120
I1213 11:03:51.459980  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:51.460142  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1213 11:03:51.460182  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 (ed6f11 1761938d '') 
I1213 11:03:51.460303  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:51.460340  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f11 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.460478  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:51.460508  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f11 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.460651  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 2
I1213 11:03:51.460678  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.460826  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 2 0:'test-1765623829395310073' 
I1213 11:03:51.460886  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 (ed6f13 1761938d '') 
I1213 11:03:51.461048  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:51.461111  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('test-1765623829395310073' 'jenkins' 'jenkins' '' q (ed6f13 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.461268  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:51.461307  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('test-1765623829395310073' 'jenkins' 'jenkins' '' q (ed6f13 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.461457  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 2
I1213 11:03:51.461495  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.461620  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1213 11:03:51.461666  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rwalk tag 0 (ed6f12 1761938d '') 
I1213 11:03:51.461810  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:51.461863  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f12 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.462011  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tstat tag 0 fid 2
I1213 11:03:51.462056  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f12 1761938d '') m 644 at 0 mt 1765623829 l 24 t 0 d 0 ext )
I1213 11:03:51.462203  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 2
I1213 11:03:51.462239  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.462405  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tread tag 0 fid 1 offset 258 count 262120
I1213 11:03:51.462439  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rread tag 0 count 0
I1213 11:03:51.462577  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 1
I1213 11:03:51.462608  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.464100  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1213 11:03:51.464185  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rerror tag 0 ename 'file not found' ecode 0
I1213 11:03:51.752656  411768 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:54284 Tclunk tag 0 fid 0
I1213 11:03:51.752711  411768 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:54284 Rclunk tag 0
I1213 11:03:51.753791  411768 main.go:127] stdlog: ufs.go:147 disconnected
I1213 11:03:51.776030  411768 out.go:179] * Unmounting /mount-9p ...
I1213 11:03:51.778932  411768 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1213 11:03:51.785994  411768 mount.go:180] unmount for /mount-9p ran successfully
I1213 11:03:51.786103  411768 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/.mount-process: {Name:mk3d046374952ea547ed391f0a5ab0f709898323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 11:03:51.789316  411768 out.go:203] 
W1213 11:03:51.792369  411768 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1213 11:03:51.795328  411768 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.48s)
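For reference, the guest-side mount the helper sets up (the ssh_runner line above) is a plain 9p mount against minikube's userspace file server on the host; reproduced from this run's log, with the bind port 37229 specific to this invocation:

	sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37229,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p

The mount itself succeeds here (findmnt and the directory listing both show the 9p share); the test only fails at the kubectl replace step, again because the apiserver on 192.168.49.2:8441 refuses connections.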

                                                
                                    
TestJSONOutput/pause/Command (2.51s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-615758 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-615758 --output=json --user=testUser: exit status 80 (2.508326773s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"55d9ab7f-40d4-4614-9e61-b72df6f85a32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-615758 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6c673e17-e2df-43db-a2da-c795b389a4be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T11:16:32Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"2da91834-f036-4d85-b66a-82c2db7418fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-615758 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.51s)

                                                
                                    
TestJSONOutput/unpause/Command (1.92s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-615758 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-615758 --output=json --user=testUser: exit status 80 (1.918308062s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"30f06ffb-c324-4187-a38e-bbe99c843e28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-615758 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"379c7e41-eb68-4a06-b04e-f0f0481ed5da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T11:16:34Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"817748a0-2e2d-4d95-9239-92fa4c54e247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-615758 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.92s)
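Both pause and unpause fail in the same underlying step: minikube lists containers on the node with runc, and that listing is the error embedded in the JSON events above. A minimal sketch of the failing probe, run by hand against this profile (illustrative only, not part of the test):

	out/minikube-linux-arm64 -p json-output-615758 ssh "sudo runc list -f json"

On this node /run/runc does not exist, so the listing exits with status 1 and both commands surface exit code 80 (GUEST_PAUSE / GUEST_UNPAUSE).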

                                                
                                    
TestKubernetesUpgrade (791.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.468639175s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-854588
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-854588: (1.555270815s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-854588 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-854588 status --format={{.Host}}: exit status 7 (134.38959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m19.945347033s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-854588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-854588" primary control-plane node in "kubernetes-upgrade-854588" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:33:40.236430  536580 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:33:40.237040  536580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:40.237072  536580 out.go:374] Setting ErrFile to fd 2...
	I1213 11:33:40.237095  536580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:33:40.237400  536580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:33:40.237812  536580 out.go:368] Setting JSON to false
	I1213 11:33:40.238814  536580 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11773,"bootTime":1765613848,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:33:40.238983  536580 start.go:143] virtualization:  
	I1213 11:33:40.245955  536580 out.go:179] * [kubernetes-upgrade-854588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:33:40.249058  536580 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:33:40.249155  536580 notify.go:221] Checking for updates...
	I1213 11:33:40.254975  536580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:33:40.257939  536580 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:33:40.260778  536580 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:33:40.263722  536580 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:33:40.266674  536580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:33:40.269942  536580 config.go:182] Loaded profile config "kubernetes-upgrade-854588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:33:40.270566  536580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:33:40.315901  536580 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:33:40.316023  536580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:33:40.393847  536580 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:33:40.377214987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:33:40.393955  536580 docker.go:319] overlay module found
	I1213 11:33:40.397109  536580 out.go:179] * Using the docker driver based on existing profile
	I1213 11:33:40.399829  536580 start.go:309] selected driver: docker
	I1213 11:33:40.399858  536580 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-854588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-854588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:40.399977  536580 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:33:40.400727  536580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:33:40.505833  536580 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:33:40.495046591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:33:40.506154  536580 cni.go:84] Creating CNI manager for ""
	I1213 11:33:40.506204  536580 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:33:40.506241  536580 start.go:353] cluster config:
	{Name:kubernetes-upgrade-854588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-854588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:40.509886  536580 out.go:179] * Starting "kubernetes-upgrade-854588" primary control-plane node in "kubernetes-upgrade-854588" cluster
	I1213 11:33:40.512898  536580 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:33:40.515895  536580 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:33:40.519711  536580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:33:40.519764  536580 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:33:40.519778  536580 cache.go:65] Caching tarball of preloaded images
	I1213 11:33:40.519864  536580 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:33:40.519880  536580 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 11:33:40.520004  536580 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/config.json ...
	I1213 11:33:40.520227  536580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:33:40.544623  536580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:33:40.544642  536580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:33:40.544657  536580 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:33:40.544688  536580 start.go:360] acquireMachinesLock for kubernetes-upgrade-854588: {Name:mk5b213407b3b8d434e6d822db40d571db5b8d5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:33:40.544741  536580 start.go:364] duration metric: took 35.972µs to acquireMachinesLock for "kubernetes-upgrade-854588"
	I1213 11:33:40.544759  536580 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:33:40.544765  536580 fix.go:54] fixHost starting: 
	I1213 11:33:40.545030  536580 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-854588 --format={{.State.Status}}
	I1213 11:33:40.582675  536580 fix.go:112] recreateIfNeeded on kubernetes-upgrade-854588: state=Stopped err=<nil>
	W1213 11:33:40.582710  536580 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:33:40.586185  536580 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-854588" ...
	I1213 11:33:40.586289  536580 cli_runner.go:164] Run: docker start kubernetes-upgrade-854588
	I1213 11:33:40.917459  536580 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-854588 --format={{.State.Status}}
	I1213 11:33:40.945708  536580 kic.go:430] container "kubernetes-upgrade-854588" state is running.
	I1213 11:33:40.946082  536580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-854588
	I1213 11:33:40.969692  536580 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/config.json ...
	I1213 11:33:40.969932  536580 machine.go:94] provisionDockerMachine start ...
	I1213 11:33:40.969991  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:40.993870  536580 main.go:143] libmachine: Using SSH client type: native
	I1213 11:33:40.994391  536580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1213 11:33:40.994405  536580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:33:40.995163  536580 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:33:44.151318  536580 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-854588
	
	I1213 11:33:44.151347  536580 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-854588"
	I1213 11:33:44.151413  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:44.169172  536580 main.go:143] libmachine: Using SSH client type: native
	I1213 11:33:44.169498  536580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1213 11:33:44.169518  536580 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-854588 && echo "kubernetes-upgrade-854588" | sudo tee /etc/hostname
	I1213 11:33:44.329865  536580 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-854588
	
	I1213 11:33:44.330019  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:44.348700  536580 main.go:143] libmachine: Using SSH client type: native
	I1213 11:33:44.349016  536580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1213 11:33:44.349041  536580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-854588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-854588/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-854588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:33:44.500036  536580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:33:44.500068  536580 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:33:44.500089  536580 ubuntu.go:190] setting up certificates
	I1213 11:33:44.500108  536580 provision.go:84] configureAuth start
	I1213 11:33:44.500172  536580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-854588
	I1213 11:33:44.522796  536580 provision.go:143] copyHostCerts
	I1213 11:33:44.522878  536580 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:33:44.522892  536580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:33:44.522973  536580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:33:44.523077  536580 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:33:44.523088  536580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:33:44.523115  536580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:33:44.523176  536580 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:33:44.523186  536580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:33:44.523210  536580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:33:44.523261  536580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-854588 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-854588 localhost minikube]
	I1213 11:33:44.637488  536580 provision.go:177] copyRemoteCerts
	I1213 11:33:44.637555  536580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:33:44.637604  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:44.654932  536580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/kubernetes-upgrade-854588/id_rsa Username:docker}
	I1213 11:33:44.759364  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:33:44.778127  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 11:33:44.796798  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:33:44.815150  536580 provision.go:87] duration metric: took 315.027804ms to configureAuth
	I1213 11:33:44.815193  536580 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:33:44.815373  536580 config.go:182] Loaded profile config "kubernetes-upgrade-854588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:33:44.815473  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:44.833168  536580 main.go:143] libmachine: Using SSH client type: native
	I1213 11:33:44.833482  536580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1213 11:33:44.833500  536580 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:33:45.271246  536580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:33:45.271315  536580 machine.go:97] duration metric: took 4.301371823s to provisionDockerMachine
	I1213 11:33:45.271336  536580 start.go:293] postStartSetup for "kubernetes-upgrade-854588" (driver="docker")
	I1213 11:33:45.271349  536580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:33:45.271429  536580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:33:45.271476  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:45.292553  536580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/kubernetes-upgrade-854588/id_rsa Username:docker}
	I1213 11:33:45.399613  536580 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:33:45.403019  536580 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:33:45.403048  536580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:33:45.403060  536580 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:33:45.403113  536580 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:33:45.403196  536580 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:33:45.403299  536580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:33:45.410899  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:33:45.428138  536580 start.go:296] duration metric: took 156.787109ms for postStartSetup
	I1213 11:33:45.428263  536580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:33:45.428338  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:45.445166  536580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/kubernetes-upgrade-854588/id_rsa Username:docker}
	I1213 11:33:45.549462  536580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:33:45.554338  536580 fix.go:56] duration metric: took 5.009564953s for fixHost
	I1213 11:33:45.554364  536580 start.go:83] releasing machines lock for "kubernetes-upgrade-854588", held for 5.009614643s
	I1213 11:33:45.554431  536580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-854588
	I1213 11:33:45.571396  536580 ssh_runner.go:195] Run: cat /version.json
	I1213 11:33:45.571455  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:45.571407  536580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:33:45.571739  536580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854588
	I1213 11:33:45.594375  536580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/kubernetes-upgrade-854588/id_rsa Username:docker}
	I1213 11:33:45.596361  536580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/kubernetes-upgrade-854588/id_rsa Username:docker}
	I1213 11:33:45.803985  536580 ssh_runner.go:195] Run: systemctl --version
	I1213 11:33:45.810512  536580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:33:45.846484  536580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:33:45.851002  536580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:33:45.851074  536580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:33:45.860762  536580 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:33:45.860786  536580 start.go:496] detecting cgroup driver to use...
	I1213 11:33:45.860837  536580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:33:45.860911  536580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:33:45.876472  536580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:33:45.889896  536580 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:33:45.889962  536580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:33:45.905871  536580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:33:45.918964  536580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:33:46.042403  536580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:33:46.183146  536580 docker.go:234] disabling docker service ...
	I1213 11:33:46.183227  536580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:33:46.201660  536580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:33:46.219410  536580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:33:46.365336  536580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:33:46.527825  536580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:33:46.551578  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:33:46.581827  536580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:33:46.581906  536580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:33:46.594168  536580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:33:46.594234  536580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:33:46.606369  536580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:33:46.618085  536580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:33:46.630137  536580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:33:46.641583  536580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:33:46.654497  536580 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:33:46.665804  536580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:33:46.678228  536580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:33:46.686731  536580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:33:46.695224  536580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:46.847929  536580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:33:47.053303  536580 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:33:47.053374  536580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:33:47.057547  536580 start.go:564] Will wait 60s for crictl version
	I1213 11:33:47.057613  536580 ssh_runner.go:195] Run: which crictl
	I1213 11:33:47.061941  536580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:33:47.089006  536580 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:33:47.089091  536580 ssh_runner.go:195] Run: crio --version
	I1213 11:33:47.119505  536580 ssh_runner.go:195] Run: crio --version
	I1213 11:33:47.154610  536580 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:33:47.157615  536580 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-854588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:33:47.179912  536580 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:33:47.184084  536580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:47.200037  536580 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-854588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-854588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:33:47.200168  536580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:33:47.200232  536580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:33:47.250858  536580 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:33:47.250931  536580 ssh_runner.go:195] Run: which lz4
	I1213 11:33:47.254954  536580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1213 11:33:47.259093  536580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1213 11:33:47.259127  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1213 11:33:49.112779  536580 crio.go:462] duration metric: took 1.85786251s to copy over tarball
	I1213 11:33:49.112854  536580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1213 11:33:51.332198  536580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219314602s)
	I1213 11:33:51.332223  536580 crio.go:469] duration metric: took 2.2194151s to extract the tarball
	I1213 11:33:51.332231  536580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1213 11:33:51.410114  536580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:33:51.447683  536580 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:33:51.447705  536580 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:33:51.447713  536580 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:33:51.447819  536580 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-854588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-854588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:33:51.447898  536580 ssh_runner.go:195] Run: crio config
	I1213 11:33:51.525234  536580 cni.go:84] Creating CNI manager for ""
	I1213 11:33:51.525263  536580 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:33:51.525288  536580 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:33:51.525311  536580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-854588 NodeName:kubernetes-upgrade-854588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:33:51.525443  536580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-854588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:33:51.525522  536580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:33:51.535694  536580 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:33:51.535769  536580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:33:51.544684  536580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1213 11:33:51.558144  536580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:33:51.570284  536580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1213 11:33:51.584092  536580 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:33:51.594127  536580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:33:51.605996  536580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:33:51.724403  536580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:33:51.742662  536580 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588 for IP: 192.168.76.2
	I1213 11:33:51.742729  536580 certs.go:195] generating shared ca certs ...
	I1213 11:33:51.742760  536580 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:51.742944  536580 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:33:51.743022  536580 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:33:51.743048  536580 certs.go:257] generating profile certs ...
	I1213 11:33:51.743171  536580 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.key
	I1213 11:33:51.743305  536580 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/apiserver.key.a85edcc0
	I1213 11:33:51.743386  536580 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/proxy-client.key
	I1213 11:33:51.743590  536580 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:33:51.743646  536580 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:33:51.743661  536580 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:33:51.743690  536580 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:33:51.743719  536580 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:33:51.743750  536580 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:33:51.743802  536580 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:33:51.747540  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:33:51.770292  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:33:51.794447  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:33:51.818794  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:33:51.842607  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 11:33:51.872729  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:33:51.898852  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:33:51.922365  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:33:51.945116  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:33:51.965634  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:33:51.982959  536580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:33:52.000583  536580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:33:52.016439  536580 ssh_runner.go:195] Run: openssl version
	I1213 11:33:52.023097  536580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:52.032350  536580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:33:52.040181  536580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:52.044419  536580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:52.044536  536580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:33:52.086856  536580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:33:52.095249  536580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:33:52.102915  536580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:33:52.110879  536580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:33:52.114723  536580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:33:52.114818  536580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:33:52.156730  536580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:33:52.163974  536580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:33:52.171205  536580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:33:52.178353  536580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:33:52.181974  536580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:33:52.182034  536580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:33:52.223853  536580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:33:52.231287  536580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:33:52.235177  536580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:33:52.276342  536580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:33:52.317908  536580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:33:52.359069  536580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:33:52.405856  536580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:33:52.446496  536580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 11:33:52.488453  536580 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-854588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-854588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:33:52.488549  536580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:33:52.488615  536580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:33:52.517416  536580 cri.go:89] found id: ""
	I1213 11:33:52.517538  536580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:33:52.527452  536580 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:33:52.527561  536580 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:33:52.527639  536580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:33:52.536199  536580 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:33:52.536704  536580 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-854588" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:33:52.536870  536580 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-854588" cluster setting kubeconfig missing "kubernetes-upgrade-854588" context setting]
	I1213 11:33:52.537231  536580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:33:52.551504  536580 kapi.go:59] client config for kubernetes-upgrade-854588: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:33:52.552237  536580 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 11:33:52.552349  536580 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 11:33:52.552393  536580 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:33:52.552415  536580 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 11:33:52.552433  536580 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:33:52.552750  536580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:33:52.588415  536580 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-13 11:33:13.281099283 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-13 11:33:51.577471655 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-854588"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1213 11:33:52.588438  536580 kubeadm.go:1161] stopping kube-system containers ...
	I1213 11:33:52.588450  536580 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1213 11:33:52.588525  536580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:33:52.615810  536580 cri.go:89] found id: ""
	I1213 11:33:52.615885  536580 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1213 11:33:52.629628  536580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:33:52.637515  536580 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 13 11:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 13 11:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 13 11:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec 13 11:33 /etc/kubernetes/scheduler.conf
	
	I1213 11:33:52.637621  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:33:52.645580  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:33:52.653673  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:33:52.661483  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:33:52.661594  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:33:52.669934  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:33:52.677656  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:33:52.677768  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:33:52.686250  536580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:33:52.695222  536580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:33:52.743032  536580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:33:53.786430  536580 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.043310627s)
	I1213 11:33:53.786488  536580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:33:54.057339  536580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1213 11:33:54.172757  536580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
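At this point the control plane is being rebuilt in place: the stale controller-manager and scheduler kubeconfigs were removed, the new /var/tmp/minikube/kubeadm.yaml was copied into place, and the kubeadm init phases are replayed in order (certs, kubeconfig, kubelet-start, control-plane, local etcd) against that config. The sketch below is only an illustration of that phase sequence using the Go standard library; it is not minikube's actual ssh_runner code, and the binary path and config path are copied from the logged commands.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Phase order mirrors the log above: certs, kubeconfig, kubelet-start,
    	// control-plane, then local etcd, all driven from the same rendered config.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(append([]string{}, p...), "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		// Prepend the version-pinned binaries dir, as the logged commands do.
    		cmd.Env = append(os.Environ(),
    			"PATH=/var/lib/minikube/binaries/v1.35.0-beta.0:"+os.Getenv("PATH"))
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }

In this run all five phases complete within about two seconds (11:33:52 to 11:33:54), so the failure that follows is not in kubeadm itself but in the apiserver never coming up afterwards.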
	I1213 11:33:54.236282  536580 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:33:54.236349  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:54.737169  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:55.236598  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:55.737328  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:56.236493  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:56.736542  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:57.236918  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:57.737277  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:58.236800  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:58.736732  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:59.236790  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:33:59.736879  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:00.236744  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:00.736709  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:01.236587  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:01.737458  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:02.237345  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:02.737122  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:03.236615  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:03.737587  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:04.237421  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:04.737377  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:05.237344  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:05.737370  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:06.237299  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:06.736535  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:07.236950  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:07.736686  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:08.237133  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:08.736486  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:09.236749  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:09.736614  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:10.237239  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:10.737006  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:11.236523  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:11.737440  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:12.236881  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:12.737062  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:13.236927  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:13.737274  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:14.236859  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:14.736809  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:15.236548  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:15.736609  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:16.237427  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:16.737158  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:17.237190  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:17.737284  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:18.237370  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:18.736466  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:19.236830  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:19.737334  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:20.236564  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:20.736535  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:21.236521  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:21.737109  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:22.236539  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:22.737464  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:23.237186  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:23.736880  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:24.236709  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:24.737459  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:25.237426  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:25.737491  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:26.236598  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:26.737418  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:27.236552  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:27.737411  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:28.236536  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:28.737089  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:29.236710  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:29.737289  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:30.236822  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:30.737296  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:31.236557  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:31.736431  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:32.236958  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:32.736758  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:33.236483  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:33.736563  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:34.237286  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:34.736533  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:35.237416  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:35.736594  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:36.237228  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:36.737210  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:37.236448  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:37.737204  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:38.236603  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:38.737479  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:39.237369  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:39.736561  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:40.237177  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:40.736552  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:41.236582  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:41.737153  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:42.236532  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:42.736744  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:43.236499  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:43.736519  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:44.236561  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:44.736519  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:45.236556  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:45.737073  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:46.237295  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:46.736610  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:47.236503  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:47.736508  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:48.237352  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:48.736610  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:49.236745  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:49.737505  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:50.237341  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:50.736726  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:51.237437  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:51.736868  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:52.236540  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:52.737290  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:53.236853  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:53.736769  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
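The block above is a poll-until-deadline wait for the kube-apiserver process: the probes land on the .236/.736 marks of every second, i.e. a roughly 500 ms cadence, and after about a minute without a hit the run falls through to diagnostics collection at 11:34:54. A minimal sketch of such a wait loop, assuming a plain stdlib implementation rather than minikube's actual api_server/ssh_runner code (only the pgrep command string is taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer probes pgrep on each tick until the kube-apiserver
    // process appears or the context deadline expires. Illustrative only.
    func waitForAPIServer(ctx context.Context, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		// Same probe the log shows: pgrep -xnf kube-apiserver.*minikube.*
    		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // process found
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("kube-apiserver process never appeared: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
    	defer cancel()
    	if err := waitForAPIServer(ctx, 500*time.Millisecond); err != nil {
    		fmt.Println(err)
    	}
    }

Here the process never appears, which is consistent with the empty crictl listings that follow: no control-plane containers were ever created by CRI-O.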
	I1213 11:34:54.237434  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:34:54.237551  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:34:54.267063  536580 cri.go:89] found id: ""
	I1213 11:34:54.267099  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.267108  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:34:54.267115  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:34:54.267172  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:34:54.299203  536580 cri.go:89] found id: ""
	I1213 11:34:54.299231  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.299241  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:34:54.299247  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:34:54.299304  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:34:54.325157  536580 cri.go:89] found id: ""
	I1213 11:34:54.325184  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.325195  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:34:54.325201  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:34:54.325266  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:34:54.351724  536580 cri.go:89] found id: ""
	I1213 11:34:54.351750  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.351759  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:34:54.351767  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:34:54.351835  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:34:54.379034  536580 cri.go:89] found id: ""
	I1213 11:34:54.379063  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.379073  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:34:54.379079  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:34:54.379139  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:34:54.408070  536580 cri.go:89] found id: ""
	I1213 11:34:54.408096  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.408106  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:34:54.408112  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:34:54.408169  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:34:54.434197  536580 cri.go:89] found id: ""
	I1213 11:34:54.434225  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.434235  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:34:54.434242  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:34:54.434304  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:34:54.464522  536580 cri.go:89] found id: ""
	I1213 11:34:54.464548  536580 logs.go:282] 0 containers: []
	W1213 11:34:54.464558  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:34:54.464567  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:34:54.464577  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:34:54.525270  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:34:54.525291  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:34:54.525308  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:34:54.559636  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:34:54.559673  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:34:54.590566  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:34:54.590594  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:34:54.662592  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:34:54.662629  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
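Each failed wait falls back to the same diagnostics sweep seen in this cycle: kubectl describe nodes (refused while the apiserver is down on localhost:8443), the CRI-O and kubelet journals, dmesg, and a crictl/docker container listing. The sketch below shows one way to reproduce that sweep on the node; the command strings are copied verbatim from the log lines above, the collection order is not significant, and this is an illustration rather than minikube's logs.go implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Fallback diagnostics collectors, as gathered in the log; errors are
    	// reported but do not stop the sweep.
    	collectors := map[string]string{
    		"kubelet":          `sudo journalctl -u kubelet -n 400`,
    		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    		"describe nodes":   `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
    		"CRI-O":            `sudo journalctl -u crio -n 400`,
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range collectors {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
    	}
    }

The same sweep repeats for every subsequent retry below, always with the same result: no containers found and the apiserver connection refused.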
	I1213 11:34:57.179386  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:34:57.189281  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:34:57.189361  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:34:57.214720  536580 cri.go:89] found id: ""
	I1213 11:34:57.214796  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.214820  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:34:57.214838  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:34:57.214926  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:34:57.241507  536580 cri.go:89] found id: ""
	I1213 11:34:57.241532  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.241542  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:34:57.241548  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:34:57.241652  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:34:57.267600  536580 cri.go:89] found id: ""
	I1213 11:34:57.267682  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.267698  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:34:57.267706  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:34:57.267764  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:34:57.293634  536580 cri.go:89] found id: ""
	I1213 11:34:57.293659  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.293669  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:34:57.293676  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:34:57.293736  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:34:57.319388  536580 cri.go:89] found id: ""
	I1213 11:34:57.319415  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.319424  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:34:57.319430  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:34:57.319488  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:34:57.353178  536580 cri.go:89] found id: ""
	I1213 11:34:57.353203  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.353213  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:34:57.353220  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:34:57.353281  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:34:57.382107  536580 cri.go:89] found id: ""
	I1213 11:34:57.382135  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.382144  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:34:57.382151  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:34:57.382206  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:34:57.415290  536580 cri.go:89] found id: ""
	I1213 11:34:57.415316  536580 logs.go:282] 0 containers: []
	W1213 11:34:57.415325  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:34:57.415335  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:34:57.415347  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:34:57.432883  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:34:57.432912  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:34:57.522478  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:34:57.522502  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:34:57.522516  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:34:57.563502  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:34:57.563554  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:34:57.597525  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:34:57.597553  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:00.176886  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:00.203166  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:00.203245  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:00.289897  536580 cri.go:89] found id: ""
	I1213 11:35:00.289927  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.289937  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:00.289944  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:00.290023  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:00.367589  536580 cri.go:89] found id: ""
	I1213 11:35:00.367616  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.367626  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:00.367634  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:00.367736  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:00.403606  536580 cri.go:89] found id: ""
	I1213 11:35:00.403633  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.403643  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:00.403650  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:00.403728  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:00.436814  536580 cri.go:89] found id: ""
	I1213 11:35:00.436844  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.436854  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:00.436860  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:00.436926  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:00.466050  536580 cri.go:89] found id: ""
	I1213 11:35:00.466074  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.466083  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:00.466090  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:00.466156  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:00.495164  536580 cri.go:89] found id: ""
	I1213 11:35:00.495190  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.495199  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:00.495206  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:00.495267  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:00.524364  536580 cri.go:89] found id: ""
	I1213 11:35:00.524390  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.524399  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:00.524407  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:00.524475  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:00.555015  536580 cri.go:89] found id: ""
	I1213 11:35:00.555043  536580 logs.go:282] 0 containers: []
	W1213 11:35:00.555053  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:00.555063  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:00.555074  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:00.629420  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:00.629463  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:00.646684  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:00.646717  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:00.717234  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:00.717257  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:00.717279  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:00.748990  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:00.749027  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:03.286254  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:03.296838  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:03.296907  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:03.323040  536580 cri.go:89] found id: ""
	I1213 11:35:03.323065  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.323074  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:03.323080  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:03.323134  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:03.356397  536580 cri.go:89] found id: ""
	I1213 11:35:03.356424  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.356475  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:03.356484  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:03.356543  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:03.381560  536580 cri.go:89] found id: ""
	I1213 11:35:03.381636  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.381653  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:03.381660  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:03.381718  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:03.411713  536580 cri.go:89] found id: ""
	I1213 11:35:03.411746  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.411756  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:03.411763  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:03.411841  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:03.439464  536580 cri.go:89] found id: ""
	I1213 11:35:03.439490  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.439499  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:03.439506  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:03.439604  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:03.469377  536580 cri.go:89] found id: ""
	I1213 11:35:03.469407  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.469417  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:03.469443  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:03.469526  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:03.502149  536580 cri.go:89] found id: ""
	I1213 11:35:03.502177  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.502186  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:03.502193  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:03.502256  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:03.530513  536580 cri.go:89] found id: ""
	I1213 11:35:03.530556  536580 logs.go:282] 0 containers: []
	W1213 11:35:03.530565  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:03.530575  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:03.530587  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:03.598114  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:03.598155  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:03.615935  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:03.615968  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:03.679060  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:03.679080  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:03.679093  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:03.710638  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:03.710670  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:06.241145  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:06.251219  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:06.251287  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:06.276857  536580 cri.go:89] found id: ""
	I1213 11:35:06.276886  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.276895  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:06.276901  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:06.276956  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:06.301821  536580 cri.go:89] found id: ""
	I1213 11:35:06.301846  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.301855  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:06.301862  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:06.301917  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:06.334197  536580 cri.go:89] found id: ""
	I1213 11:35:06.334223  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.334232  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:06.334238  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:06.334293  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:06.359681  536580 cri.go:89] found id: ""
	I1213 11:35:06.359709  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.359717  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:06.359724  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:06.359780  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:06.385723  536580 cri.go:89] found id: ""
	I1213 11:35:06.385750  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.385759  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:06.385765  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:06.385821  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:06.414565  536580 cri.go:89] found id: ""
	I1213 11:35:06.414589  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.414597  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:06.414604  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:06.414673  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:06.440117  536580 cri.go:89] found id: ""
	I1213 11:35:06.440143  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.440153  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:06.440159  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:06.440265  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:06.465451  536580 cri.go:89] found id: ""
	I1213 11:35:06.465477  536580 logs.go:282] 0 containers: []
	W1213 11:35:06.465487  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:06.465497  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:06.465508  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:06.496422  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:06.496455  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:06.524598  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:06.524631  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:06.591807  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:06.591844  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:06.608336  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:06.608372  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:06.672760  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:09.174464  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:09.184394  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:09.184464  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:09.210439  536580 cri.go:89] found id: ""
	I1213 11:35:09.210463  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.210473  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:09.210479  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:09.210539  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:09.237540  536580 cri.go:89] found id: ""
	I1213 11:35:09.237574  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.237583  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:09.237590  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:09.237642  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:09.263048  536580 cri.go:89] found id: ""
	I1213 11:35:09.263073  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.263082  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:09.263090  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:09.263153  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:09.300549  536580 cri.go:89] found id: ""
	I1213 11:35:09.300575  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.300584  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:09.300591  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:09.300656  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:09.325111  536580 cri.go:89] found id: ""
	I1213 11:35:09.325139  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.325148  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:09.325154  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:09.325209  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:09.356695  536580 cri.go:89] found id: ""
	I1213 11:35:09.356721  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.356729  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:09.356736  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:09.356791  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:09.382682  536580 cri.go:89] found id: ""
	I1213 11:35:09.382706  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.382716  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:09.382722  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:09.382778  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:09.407251  536580 cri.go:89] found id: ""
	I1213 11:35:09.407276  536580 logs.go:282] 0 containers: []
	W1213 11:35:09.407285  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:09.407294  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:09.407307  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:09.423599  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:09.423683  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:09.509764  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:09.509782  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:09.509794  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:09.541354  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:09.541395  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:09.571697  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:09.571726  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:12.142266  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:12.153089  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:12.153228  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:12.185920  536580 cri.go:89] found id: ""
	I1213 11:35:12.185947  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.185957  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:12.185963  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:12.186035  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:12.221489  536580 cri.go:89] found id: ""
	I1213 11:35:12.221589  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.221613  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:12.221647  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:12.221742  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:12.253888  536580 cri.go:89] found id: ""
	I1213 11:35:12.253909  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.253917  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:12.253924  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:12.253979  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:12.285845  536580 cri.go:89] found id: ""
	I1213 11:35:12.285867  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.285876  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:12.285882  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:12.285950  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:12.337837  536580 cri.go:89] found id: ""
	I1213 11:35:12.337858  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.337867  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:12.337873  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:12.337927  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:12.385973  536580 cri.go:89] found id: ""
	I1213 11:35:12.385994  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.386003  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:12.386009  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:12.386074  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:12.417845  536580 cri.go:89] found id: ""
	I1213 11:35:12.417866  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.417875  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:12.417881  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:12.417935  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:12.453866  536580 cri.go:89] found id: ""
	I1213 11:35:12.453942  536580 logs.go:282] 0 containers: []
	W1213 11:35:12.453966  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:12.453988  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:12.454029  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:12.472326  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:12.472412  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:12.565790  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:12.565861  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:12.565900  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:12.602776  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:12.602855  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:12.654320  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:12.654345  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:15.232544  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:15.242948  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:15.243029  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:15.268358  536580 cri.go:89] found id: ""
	I1213 11:35:15.268383  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.268392  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:15.268398  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:15.268455  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:15.296323  536580 cri.go:89] found id: ""
	I1213 11:35:15.296354  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.296363  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:15.296369  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:15.296426  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:15.322302  536580 cri.go:89] found id: ""
	I1213 11:35:15.322327  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.322337  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:15.322343  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:15.322402  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:15.350408  536580 cri.go:89] found id: ""
	I1213 11:35:15.350436  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.350445  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:15.350452  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:15.350516  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:15.376499  536580 cri.go:89] found id: ""
	I1213 11:35:15.376526  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.376535  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:15.376541  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:15.376598  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:15.402565  536580 cri.go:89] found id: ""
	I1213 11:35:15.402591  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.402599  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:15.402607  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:15.402671  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:15.427633  536580 cri.go:89] found id: ""
	I1213 11:35:15.427655  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.427664  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:15.427670  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:15.427731  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:15.458651  536580 cri.go:89] found id: ""
	I1213 11:35:15.458675  536580 logs.go:282] 0 containers: []
	W1213 11:35:15.458684  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:15.458693  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:15.458706  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:15.526157  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:15.526199  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:15.542535  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:15.542567  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:15.607626  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:15.607650  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:15.607663  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:15.639988  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:15.640027  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:18.172557  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:18.183136  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:18.183210  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:18.209017  536580 cri.go:89] found id: ""
	I1213 11:35:18.209091  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.209114  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:18.209127  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:18.209199  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:18.236056  536580 cri.go:89] found id: ""
	I1213 11:35:18.236079  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.236088  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:18.236123  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:18.236200  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:18.261647  536580 cri.go:89] found id: ""
	I1213 11:35:18.261717  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.261740  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:18.261759  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:18.261856  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:18.288668  536580 cri.go:89] found id: ""
	I1213 11:35:18.288738  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.288762  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:18.288781  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:18.288857  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:18.318203  536580 cri.go:89] found id: ""
	I1213 11:35:18.318283  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.318306  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:18.318324  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:18.318417  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:18.350099  536580 cri.go:89] found id: ""
	I1213 11:35:18.350168  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.350191  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:18.350214  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:18.350300  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:18.375884  536580 cri.go:89] found id: ""
	I1213 11:35:18.375907  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.375916  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:18.375922  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:18.375980  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:18.402335  536580 cri.go:89] found id: ""
	I1213 11:35:18.402410  536580 logs.go:282] 0 containers: []
	W1213 11:35:18.402434  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:18.402456  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:18.402500  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:18.469297  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:18.469333  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:18.485477  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:18.485504  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:18.552987  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:18.553061  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:18.553081  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:18.584132  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:18.584167  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:21.113482  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:21.124086  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:21.124166  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:21.164787  536580 cri.go:89] found id: ""
	I1213 11:35:21.164813  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.164840  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:21.164847  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:21.164910  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:21.201911  536580 cri.go:89] found id: ""
	I1213 11:35:21.201934  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.201942  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:21.201949  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:21.202006  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:21.237535  536580 cri.go:89] found id: ""
	I1213 11:35:21.237558  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.237566  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:21.237573  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:21.237632  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:21.274056  536580 cri.go:89] found id: ""
	I1213 11:35:21.274081  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.274091  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:21.274097  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:21.274154  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:21.307256  536580 cri.go:89] found id: ""
	I1213 11:35:21.307278  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.307287  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:21.307294  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:21.307350  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:21.347622  536580 cri.go:89] found id: ""
	I1213 11:35:21.347643  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.347652  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:21.347664  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:21.347717  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:21.375264  536580 cri.go:89] found id: ""
	I1213 11:35:21.375289  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.375298  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:21.375304  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:21.375363  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:21.404465  536580 cri.go:89] found id: ""
	I1213 11:35:21.404491  536580 logs.go:282] 0 containers: []
	W1213 11:35:21.404500  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:21.404512  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:21.404525  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:21.434858  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:21.434892  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:21.463682  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:21.463712  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:21.530096  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:21.530133  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:21.546091  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:21.546119  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:21.612515  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
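
(Illustrative aside, not part of the captured output.) The cycle above repeats every few seconds: the tooling probes for a kube-apiserver process, lists CRI containers for each control-plane component with "sudo crictl ps -a --quiet --name=<component>", finds none, then gathers kubelet, dmesg, CRI-O and "describe nodes" output before retrying. Below is a minimal Go sketch of that kind of poll loop, using only the crictl invocation visible in the log; the helper name listContainers and the five-minute deadline are assumptions for the sketch, not minikube's actual internals.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// listContainers mirrors the "sudo crictl ps -a --quiet --name=<component>"
// calls in the log above and returns the matching container IDs, if any.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout, for illustration only
	for time.Now().Before(deadline) {
		found := 0
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			found += len(ids)
		}
		if found > 0 {
			fmt.Println("control-plane containers present")
			return
		}
		// Nothing running yet: the real tooling gathers kubelet, dmesg, CRI-O
		// and "describe nodes" output here, then sleeps and retries.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for control-plane containers")
}

In the run recorded here the probe never finds a container, which is why the same listing and log-gathering sequence recurs below until the test gives up.
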
	I1213 11:35:24.112738  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:24.124196  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:24.124268  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:24.154955  536580 cri.go:89] found id: ""
	I1213 11:35:24.154978  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.154988  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:24.154995  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:24.155053  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:24.193827  536580 cri.go:89] found id: ""
	I1213 11:35:24.193854  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.193864  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:24.193872  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:24.193933  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:24.233010  536580 cri.go:89] found id: ""
	I1213 11:35:24.233034  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.233043  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:24.233061  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:24.233118  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:24.275935  536580 cri.go:89] found id: ""
	I1213 11:35:24.275959  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.275980  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:24.275987  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:24.276054  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:24.322125  536580 cri.go:89] found id: ""
	I1213 11:35:24.322158  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.322167  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:24.322174  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:24.322230  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:24.374282  536580 cri.go:89] found id: ""
	I1213 11:35:24.374305  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.374314  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:24.374320  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:24.374379  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:24.415314  536580 cri.go:89] found id: ""
	I1213 11:35:24.415337  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.415346  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:24.415353  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:24.415408  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:24.454071  536580 cri.go:89] found id: ""
	I1213 11:35:24.454096  536580 logs.go:282] 0 containers: []
	W1213 11:35:24.454105  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:24.454114  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:24.454129  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:24.546894  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:24.546913  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:24.546925  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:24.588519  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:24.588621  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:24.659728  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:24.659812  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:24.741672  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:24.741753  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:27.259608  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:27.269662  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:27.269734  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:27.296345  536580 cri.go:89] found id: ""
	I1213 11:35:27.296389  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.296401  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:27.296407  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:27.296472  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:27.324481  536580 cri.go:89] found id: ""
	I1213 11:35:27.324504  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.324514  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:27.324521  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:27.324576  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:27.353697  536580 cri.go:89] found id: ""
	I1213 11:35:27.353720  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.353729  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:27.353735  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:27.353795  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:27.380440  536580 cri.go:89] found id: ""
	I1213 11:35:27.380463  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.380473  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:27.380480  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:27.380535  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:27.406460  536580 cri.go:89] found id: ""
	I1213 11:35:27.406482  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.406491  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:27.406497  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:27.406553  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:27.432115  536580 cri.go:89] found id: ""
	I1213 11:35:27.432141  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.432149  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:27.432156  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:27.432220  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:27.458320  536580 cri.go:89] found id: ""
	I1213 11:35:27.458343  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.458352  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:27.458359  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:27.458453  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:27.496054  536580 cri.go:89] found id: ""
	I1213 11:35:27.496077  536580 logs.go:282] 0 containers: []
	W1213 11:35:27.496086  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:27.496096  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:27.496108  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:27.574650  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:27.576539  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:27.598227  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:27.598299  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:27.692380  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:27.692441  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:27.692471  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:27.731460  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:27.731495  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:30.263363  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:30.273852  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:30.273921  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:30.300445  536580 cri.go:89] found id: ""
	I1213 11:35:30.300468  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.300477  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:30.300483  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:30.300542  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:30.333040  536580 cri.go:89] found id: ""
	I1213 11:35:30.333063  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.333071  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:30.333083  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:30.333140  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:30.359848  536580 cri.go:89] found id: ""
	I1213 11:35:30.359871  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.359881  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:30.359889  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:30.359948  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:30.386570  536580 cri.go:89] found id: ""
	I1213 11:35:30.386592  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.386600  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:30.386607  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:30.386663  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:30.413596  536580 cri.go:89] found id: ""
	I1213 11:35:30.413623  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.413632  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:30.413638  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:30.413695  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:30.440759  536580 cri.go:89] found id: ""
	I1213 11:35:30.440784  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.440793  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:30.440800  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:30.440861  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:30.466066  536580 cri.go:89] found id: ""
	I1213 11:35:30.466088  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.466097  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:30.466103  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:30.466157  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:30.493095  536580 cri.go:89] found id: ""
	I1213 11:35:30.493118  536580 logs.go:282] 0 containers: []
	W1213 11:35:30.493128  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:30.493138  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:30.493151  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:30.522771  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:30.522799  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:30.594746  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:30.594784  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:30.613590  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:30.613618  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:30.677223  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:30.677300  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:30.677328  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:33.208736  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:33.218593  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:33.218661  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:33.244790  536580 cri.go:89] found id: ""
	I1213 11:35:33.244814  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.244823  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:33.244831  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:33.244887  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:33.269795  536580 cri.go:89] found id: ""
	I1213 11:35:33.269819  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.269828  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:33.269835  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:33.269895  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:33.294442  536580 cri.go:89] found id: ""
	I1213 11:35:33.294467  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.294476  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:33.294482  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:33.294540  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:33.322724  536580 cri.go:89] found id: ""
	I1213 11:35:33.322747  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.322755  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:33.322762  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:33.322818  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:33.348768  536580 cri.go:89] found id: ""
	I1213 11:35:33.348805  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.348815  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:33.348822  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:33.348880  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:33.373270  536580 cri.go:89] found id: ""
	I1213 11:35:33.373292  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.373302  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:33.373308  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:33.373362  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:33.398479  536580 cri.go:89] found id: ""
	I1213 11:35:33.398502  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.398510  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:33.398516  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:33.398575  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:33.424734  536580 cri.go:89] found id: ""
	I1213 11:35:33.424757  536580 logs.go:282] 0 containers: []
	W1213 11:35:33.424765  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:33.424774  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:33.424786  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:33.491418  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:33.491456  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:33.507407  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:33.507435  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:33.572391  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:33.572417  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:33.572443  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:33.603821  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:33.603860  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:36.138316  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:36.148207  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:36.148274  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:36.173819  536580 cri.go:89] found id: ""
	I1213 11:35:36.173844  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.173853  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:36.173860  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:36.173916  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:36.200621  536580 cri.go:89] found id: ""
	I1213 11:35:36.200644  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.200653  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:36.200659  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:36.200714  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:36.225724  536580 cri.go:89] found id: ""
	I1213 11:35:36.225746  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.225755  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:36.225761  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:36.225830  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:36.250741  536580 cri.go:89] found id: ""
	I1213 11:35:36.250764  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.250773  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:36.250780  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:36.250838  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:36.280598  536580 cri.go:89] found id: ""
	I1213 11:35:36.280621  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.280630  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:36.280636  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:36.280696  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:36.306932  536580 cri.go:89] found id: ""
	I1213 11:35:36.306955  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.306964  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:36.306971  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:36.307029  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:36.339002  536580 cri.go:89] found id: ""
	I1213 11:35:36.339025  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.339034  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:36.339046  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:36.339105  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:36.365280  536580 cri.go:89] found id: ""
	I1213 11:35:36.365303  536580 logs.go:282] 0 containers: []
	W1213 11:35:36.365312  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:36.365321  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:36.365332  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:36.395719  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:36.395786  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:36.462533  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:36.462569  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:36.478927  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:36.478969  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:36.549364  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:36.549386  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:36.549399  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:39.080643  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:39.090669  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:39.090744  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:39.119308  536580 cri.go:89] found id: ""
	I1213 11:35:39.119330  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.119339  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:39.119347  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:39.119401  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:39.145277  536580 cri.go:89] found id: ""
	I1213 11:35:39.145299  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.145307  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:39.145315  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:39.145369  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:39.169633  536580 cri.go:89] found id: ""
	I1213 11:35:39.169655  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.169664  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:39.169670  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:39.169723  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:39.194019  536580 cri.go:89] found id: ""
	I1213 11:35:39.194040  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.194049  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:39.194055  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:39.194117  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:39.218400  536580 cri.go:89] found id: ""
	I1213 11:35:39.218422  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.218430  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:39.218436  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:39.218492  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:39.244091  536580 cri.go:89] found id: ""
	I1213 11:35:39.244118  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.244127  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:39.244135  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:39.244190  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:39.269500  536580 cri.go:89] found id: ""
	I1213 11:35:39.269524  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.269533  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:39.269540  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:39.269595  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:39.299615  536580 cri.go:89] found id: ""
	I1213 11:35:39.299637  536580 logs.go:282] 0 containers: []
	W1213 11:35:39.299646  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:39.299655  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:39.299667  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:39.367064  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:39.367101  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:39.383325  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:39.383357  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:39.451060  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:39.451084  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:39.451097  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:39.481484  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:39.481521  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:42.011679  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:42.029062  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:42.029135  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:42.106456  536580 cri.go:89] found id: ""
	I1213 11:35:42.106496  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.106507  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:42.106515  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:42.106583  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:42.160091  536580 cri.go:89] found id: ""
	I1213 11:35:42.160148  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.160160  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:42.160169  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:42.160245  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:42.201214  536580 cri.go:89] found id: ""
	I1213 11:35:42.201246  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.201257  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:42.201264  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:42.201345  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:42.231582  536580 cri.go:89] found id: ""
	I1213 11:35:42.231611  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.231620  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:42.231627  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:42.231691  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:42.261381  536580 cri.go:89] found id: ""
	I1213 11:35:42.261409  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.261420  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:42.261428  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:42.261491  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:42.291205  536580 cri.go:89] found id: ""
	I1213 11:35:42.291232  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.291242  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:42.291254  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:42.291332  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:42.325085  536580 cri.go:89] found id: ""
	I1213 11:35:42.325111  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.325121  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:42.325127  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:42.325186  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:42.352896  536580 cri.go:89] found id: ""
	I1213 11:35:42.352922  536580 logs.go:282] 0 containers: []
	W1213 11:35:42.352931  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:42.352940  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:42.352952  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:42.425106  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:42.425145  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:42.441289  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:42.441316  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:42.512290  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:42.512398  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:42.512427  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:42.542725  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:42.542763  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:45.071976  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:45.086992  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:45.087082  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:45.135830  536580 cri.go:89] found id: ""
	I1213 11:35:45.135856  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.135878  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:45.135886  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:45.135953  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:45.172579  536580 cri.go:89] found id: ""
	I1213 11:35:45.172629  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.172642  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:45.172650  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:45.172731  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:45.212809  536580 cri.go:89] found id: ""
	I1213 11:35:45.212856  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.212868  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:45.212876  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:45.213000  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:45.249301  536580 cri.go:89] found id: ""
	I1213 11:35:45.249370  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.249393  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:45.249402  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:45.249467  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:45.283094  536580 cri.go:89] found id: ""
	I1213 11:35:45.283120  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.283130  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:45.283137  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:45.283198  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:45.313978  536580 cri.go:89] found id: ""
	I1213 11:35:45.314002  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.314011  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:45.314019  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:45.314076  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:45.340720  536580 cri.go:89] found id: ""
	I1213 11:35:45.340742  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.340764  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:45.340771  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:45.340848  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:45.366077  536580 cri.go:89] found id: ""
	I1213 11:35:45.366104  536580 logs.go:282] 0 containers: []
	W1213 11:35:45.366113  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:45.366123  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:45.366136  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:45.400731  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:45.400766  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:45.429336  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:45.429412  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:45.500409  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:45.500444  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:45.518049  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:45.518078  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:45.583367  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
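
(Illustrative aside, not part of the captured output.) Every "failed describe nodes" block above reduces to the same symptom: kubectl, pointed at the server localhost:8443 via /var/lib/minikube/kubeconfig, cannot open a TCP connection because no kube-apiserver container is running to listen on that port. A minimal Go sketch of that reachability check, assuming the same localhost:8443 endpoint reported in the stderr:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try the endpoint kubectl reports in the stderr above. With no
	// kube-apiserver listening, the dial fails (typically "connection refused").
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
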
	I1213 11:35:48.083727  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:48.095316  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:48.095393  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:48.123329  536580 cri.go:89] found id: ""
	I1213 11:35:48.123354  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.123376  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:48.123383  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:48.123469  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:48.149345  536580 cri.go:89] found id: ""
	I1213 11:35:48.149371  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.149397  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:48.149404  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:48.149467  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:48.174994  536580 cri.go:89] found id: ""
	I1213 11:35:48.175026  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.175035  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:48.175041  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:48.175120  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:48.200838  536580 cri.go:89] found id: ""
	I1213 11:35:48.200860  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.200869  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:48.200875  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:48.200933  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:48.233229  536580 cri.go:89] found id: ""
	I1213 11:35:48.233261  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.233270  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:48.233277  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:48.233346  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:48.258705  536580 cri.go:89] found id: ""
	I1213 11:35:48.258733  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.258750  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:48.258757  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:48.258826  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:48.288858  536580 cri.go:89] found id: ""
	I1213 11:35:48.288902  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.288913  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:48.288920  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:48.289034  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:48.318723  536580 cri.go:89] found id: ""
	I1213 11:35:48.318749  536580 logs.go:282] 0 containers: []
	W1213 11:35:48.318757  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:48.318766  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:48.318781  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:48.336843  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:48.336874  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:48.410207  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:48.410274  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:48.410301  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:48.442536  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:48.442574  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:48.475057  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:48.475086  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:51.045812  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:51.056974  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:51.057039  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:51.092573  536580 cri.go:89] found id: ""
	I1213 11:35:51.092601  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.092610  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:51.092618  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:51.092679  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:51.118228  536580 cri.go:89] found id: ""
	I1213 11:35:51.118252  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.118261  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:51.118274  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:51.118330  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:51.148855  536580 cri.go:89] found id: ""
	I1213 11:35:51.148892  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.148906  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:51.148912  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:51.148973  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:51.173827  536580 cri.go:89] found id: ""
	I1213 11:35:51.173851  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.173860  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:51.173866  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:51.173926  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:51.202122  536580 cri.go:89] found id: ""
	I1213 11:35:51.202144  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.202154  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:51.202160  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:51.202221  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:51.228934  536580 cri.go:89] found id: ""
	I1213 11:35:51.228957  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.228966  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:51.228973  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:51.229030  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:51.254396  536580 cri.go:89] found id: ""
	I1213 11:35:51.254418  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.254427  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:51.254437  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:51.254495  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:51.279857  536580 cri.go:89] found id: ""
	I1213 11:35:51.279880  536580 logs.go:282] 0 containers: []
	W1213 11:35:51.279888  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:51.279898  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:51.279911  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:51.295824  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:51.295855  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:51.365800  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:51.365821  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:51.365833  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:51.400958  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:51.400996  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:51.434667  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:51.434692  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:54.000976  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:54.014121  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:54.014193  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:54.046207  536580 cri.go:89] found id: ""
	I1213 11:35:54.046236  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.046245  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:54.046253  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:54.046312  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:54.082097  536580 cri.go:89] found id: ""
	I1213 11:35:54.082167  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.082192  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:54.082212  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:54.082302  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:54.120749  536580 cri.go:89] found id: ""
	I1213 11:35:54.120772  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.120782  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:54.120788  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:54.120845  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:54.147004  536580 cri.go:89] found id: ""
	I1213 11:35:54.147025  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.147034  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:54.147041  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:54.147095  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:54.174425  536580 cri.go:89] found id: ""
	I1213 11:35:54.174453  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.174463  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:54.174469  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:54.174525  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:54.199449  536580 cri.go:89] found id: ""
	I1213 11:35:54.199477  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.199486  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:54.199493  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:54.199572  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:54.224494  536580 cri.go:89] found id: ""
	I1213 11:35:54.224516  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.224524  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:54.224531  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:54.224598  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:54.250244  536580 cri.go:89] found id: ""
	I1213 11:35:54.250267  536580 logs.go:282] 0 containers: []
	W1213 11:35:54.250276  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:54.250286  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:54.250298  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:54.316403  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:54.316438  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:54.332993  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:54.333022  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:54.397475  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:54.397495  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:54.397509  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:54.428592  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:54.428627  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:56.957749  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:56.967700  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:56.967767  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:56.994877  536580 cri.go:89] found id: ""
	I1213 11:35:56.994900  536580 logs.go:282] 0 containers: []
	W1213 11:35:56.994909  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:56.994915  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:56.994984  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:57.030085  536580 cri.go:89] found id: ""
	I1213 11:35:57.030166  536580 logs.go:282] 0 containers: []
	W1213 11:35:57.030193  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:57.030228  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:57.030324  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:35:57.064596  536580 cri.go:89] found id: ""
	I1213 11:35:57.064617  536580 logs.go:282] 0 containers: []
	W1213 11:35:57.064626  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:35:57.064632  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:35:57.064686  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:35:57.091083  536580 cri.go:89] found id: ""
	I1213 11:35:57.091105  536580 logs.go:282] 0 containers: []
	W1213 11:35:57.091113  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:35:57.091119  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:35:57.091173  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:35:57.122252  536580 cri.go:89] found id: ""
	I1213 11:35:57.122273  536580 logs.go:282] 0 containers: []
	W1213 11:35:57.122282  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:35:57.122288  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:35:57.122348  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:35:57.148756  536580 cri.go:89] found id: ""
	I1213 11:35:57.148780  536580 logs.go:282] 0 containers: []
	W1213 11:35:57.148789  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:35:57.148795  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:35:57.148852  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:35:57.175134  536580 cri.go:89] found id: ""
	I1213 11:35:57.175156  536580 logs.go:282] 0 containers: []
	W1213 11:35:57.175165  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:35:57.175171  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:35:57.175224  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:35:57.205998  536580 cri.go:89] found id: ""
	I1213 11:35:57.206070  536580 logs.go:282] 0 containers: []
	W1213 11:35:57.206093  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:35:57.206116  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:35:57.206156  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:35:57.272699  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:35:57.272737  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:35:57.289751  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:35:57.289779  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:35:57.360607  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:35:57.360684  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:35:57.360712  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:35:57.392517  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:35:57.392554  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:35:59.923946  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:35:59.935867  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:35:59.935988  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:35:59.962110  536580 cri.go:89] found id: ""
	I1213 11:35:59.962138  536580 logs.go:282] 0 containers: []
	W1213 11:35:59.962147  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:35:59.962154  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:35:59.962209  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:35:59.988030  536580 cri.go:89] found id: ""
	I1213 11:35:59.988054  536580 logs.go:282] 0 containers: []
	W1213 11:35:59.988064  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:35:59.988070  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:35:59.988126  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:00.131401  536580 cri.go:89] found id: ""
	I1213 11:36:00.131428  536580 logs.go:282] 0 containers: []
	W1213 11:36:00.131438  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:00.131446  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:00.131636  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:00.287500  536580 cri.go:89] found id: ""
	I1213 11:36:00.287611  536580 logs.go:282] 0 containers: []
	W1213 11:36:00.287636  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:00.287673  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:00.287783  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:00.347987  536580 cri.go:89] found id: ""
	I1213 11:36:00.348012  536580 logs.go:282] 0 containers: []
	W1213 11:36:00.348021  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:00.348029  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:00.348114  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:00.401680  536580 cri.go:89] found id: ""
	I1213 11:36:00.401709  536580 logs.go:282] 0 containers: []
	W1213 11:36:00.401721  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:00.401730  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:00.401796  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:00.452869  536580 cri.go:89] found id: ""
	I1213 11:36:00.453049  536580 logs.go:282] 0 containers: []
	W1213 11:36:00.453061  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:00.453068  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:00.453139  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:00.512388  536580 cri.go:89] found id: ""
	I1213 11:36:00.512420  536580 logs.go:282] 0 containers: []
	W1213 11:36:00.512429  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:00.512442  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:00.512458  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:00.607673  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:00.607766  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:00.624855  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:00.624934  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:00.727791  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:00.727819  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:00.727835  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:00.769955  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:00.770048  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:03.312513  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:03.324249  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:03.324328  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:03.354186  536580 cri.go:89] found id: ""
	I1213 11:36:03.354213  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.354224  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:03.354232  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:03.354290  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:03.392941  536580 cri.go:89] found id: ""
	I1213 11:36:03.392981  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.392991  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:03.393016  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:03.393134  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:03.446830  536580 cri.go:89] found id: ""
	I1213 11:36:03.446852  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.446861  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:03.446868  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:03.446922  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:03.478332  536580 cri.go:89] found id: ""
	I1213 11:36:03.478355  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.478363  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:03.478370  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:03.478427  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:03.526179  536580 cri.go:89] found id: ""
	I1213 11:36:03.526200  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.526208  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:03.526215  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:03.526271  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:03.561963  536580 cri.go:89] found id: ""
	I1213 11:36:03.562040  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.562065  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:03.562085  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:03.562169  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:03.601918  536580 cri.go:89] found id: ""
	I1213 11:36:03.601993  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.602017  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:03.602036  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:03.602126  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:03.664796  536580 cri.go:89] found id: ""
	I1213 11:36:03.664872  536580 logs.go:282] 0 containers: []
	W1213 11:36:03.664895  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:03.664917  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:03.664957  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:03.749924  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:03.749966  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:03.766368  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:03.766395  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:03.846421  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:03.846444  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:03.846463  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:03.888025  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:03.888070  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:06.438854  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:06.449110  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:06.449179  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:06.473586  536580 cri.go:89] found id: ""
	I1213 11:36:06.473655  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.473674  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:06.473683  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:06.473743  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:06.500753  536580 cri.go:89] found id: ""
	I1213 11:36:06.500776  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.500785  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:06.500791  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:06.500850  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:06.528284  536580 cri.go:89] found id: ""
	I1213 11:36:06.528308  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.528317  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:06.528338  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:06.528396  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:06.554762  536580 cri.go:89] found id: ""
	I1213 11:36:06.554788  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.554797  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:06.554805  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:06.554863  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:06.580432  536580 cri.go:89] found id: ""
	I1213 11:36:06.580461  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.580471  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:06.580477  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:06.580534  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:06.607490  536580 cri.go:89] found id: ""
	I1213 11:36:06.607535  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.607545  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:06.607552  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:06.607610  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:06.634383  536580 cri.go:89] found id: ""
	I1213 11:36:06.634408  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.634417  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:06.634423  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:06.634480  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:06.661365  536580 cri.go:89] found id: ""
	I1213 11:36:06.661437  536580 logs.go:282] 0 containers: []
	W1213 11:36:06.661451  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:06.661462  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:06.661474  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:06.694972  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:06.694998  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:06.765593  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:06.765633  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:06.783082  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:06.783156  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:06.851252  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:06.851322  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:06.851350  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:09.381889  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:09.393154  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:09.393223  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:09.419393  536580 cri.go:89] found id: ""
	I1213 11:36:09.419422  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.419432  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:09.419439  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:09.419500  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:09.451507  536580 cri.go:89] found id: ""
	I1213 11:36:09.451555  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.451564  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:09.451571  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:09.451635  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:09.476751  536580 cri.go:89] found id: ""
	I1213 11:36:09.476775  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.476783  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:09.476795  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:09.476851  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:09.503991  536580 cri.go:89] found id: ""
	I1213 11:36:09.504014  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.504021  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:09.504028  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:09.504086  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:09.530342  536580 cri.go:89] found id: ""
	I1213 11:36:09.530365  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.530374  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:09.530380  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:09.530480  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:09.556160  536580 cri.go:89] found id: ""
	I1213 11:36:09.556186  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.556195  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:09.556202  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:09.556262  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:09.581433  536580 cri.go:89] found id: ""
	I1213 11:36:09.581459  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.581468  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:09.581474  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:09.581528  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:09.605964  536580 cri.go:89] found id: ""
	I1213 11:36:09.605989  536580 logs.go:282] 0 containers: []
	W1213 11:36:09.605998  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:09.606008  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:09.606020  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:09.672459  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:09.672495  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:09.688720  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:09.688747  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:09.760689  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:09.760709  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:09.760721  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:09.794594  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:09.794638  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:12.327651  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:12.338062  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:12.338136  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:12.364220  536580 cri.go:89] found id: ""
	I1213 11:36:12.364244  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.364253  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:12.364260  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:12.364325  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:12.397215  536580 cri.go:89] found id: ""
	I1213 11:36:12.397243  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.397256  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:12.397263  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:12.397335  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:12.423926  536580 cri.go:89] found id: ""
	I1213 11:36:12.423950  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.423960  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:12.423966  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:12.424026  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:12.450076  536580 cri.go:89] found id: ""
	I1213 11:36:12.450105  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.450119  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:12.450126  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:12.450187  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:12.475295  536580 cri.go:89] found id: ""
	I1213 11:36:12.475323  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.475334  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:12.475341  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:12.475396  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:12.502227  536580 cri.go:89] found id: ""
	I1213 11:36:12.502252  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.502261  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:12.502268  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:12.502326  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:12.531427  536580 cri.go:89] found id: ""
	I1213 11:36:12.531453  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.531463  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:12.531469  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:12.531549  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:12.556449  536580 cri.go:89] found id: ""
	I1213 11:36:12.556471  536580 logs.go:282] 0 containers: []
	W1213 11:36:12.556480  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:12.556489  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:12.556501  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:12.621112  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:12.621132  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:12.621144  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:12.651311  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:12.651346  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:12.682147  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:12.682175  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:12.748424  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:12.748458  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:15.265302  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:15.285396  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:15.285460  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:15.322724  536580 cri.go:89] found id: ""
	I1213 11:36:15.322745  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.322753  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:15.322759  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:15.322815  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:15.366595  536580 cri.go:89] found id: ""
	I1213 11:36:15.366616  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.366624  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:15.366630  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:15.366685  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:15.408721  536580 cri.go:89] found id: ""
	I1213 11:36:15.408742  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.408750  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:15.408756  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:15.408814  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:15.438764  536580 cri.go:89] found id: ""
	I1213 11:36:15.438835  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.438858  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:15.438877  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:15.438967  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:15.487701  536580 cri.go:89] found id: ""
	I1213 11:36:15.487778  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.487801  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:15.487821  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:15.487927  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:15.522138  536580 cri.go:89] found id: ""
	I1213 11:36:15.522211  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.522236  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:15.522255  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:15.522340  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:15.566138  536580 cri.go:89] found id: ""
	I1213 11:36:15.566214  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.566239  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:15.566259  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:15.566350  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:15.605904  536580 cri.go:89] found id: ""
	I1213 11:36:15.605927  536580 logs.go:282] 0 containers: []
	W1213 11:36:15.605936  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:15.605945  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:15.605956  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:15.640149  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:15.640188  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:15.685591  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:15.685623  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:15.753639  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:15.753685  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:15.770480  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:15.770510  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:15.836739  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:18.336961  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:18.348442  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:18.348510  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:18.389257  536580 cri.go:89] found id: ""
	I1213 11:36:18.389284  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.389293  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:18.389300  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:18.389356  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:18.426596  536580 cri.go:89] found id: ""
	I1213 11:36:18.426621  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.426630  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:18.426637  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:18.426688  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:18.460740  536580 cri.go:89] found id: ""
	I1213 11:36:18.460772  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.460781  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:18.460787  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:18.460885  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:18.500301  536580 cri.go:89] found id: ""
	I1213 11:36:18.500343  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.500362  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:18.500383  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:18.500447  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:18.556096  536580 cri.go:89] found id: ""
	I1213 11:36:18.556129  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.556139  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:18.556146  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:18.556231  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:18.598753  536580 cri.go:89] found id: ""
	I1213 11:36:18.598787  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.598796  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:18.598804  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:18.598864  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:18.649297  536580 cri.go:89] found id: ""
	I1213 11:36:18.649331  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.649341  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:18.649348  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:18.649413  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:18.691229  536580 cri.go:89] found id: ""
	I1213 11:36:18.691252  536580 logs.go:282] 0 containers: []
	W1213 11:36:18.691261  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:18.691270  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:18.691293  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:18.783260  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:18.783314  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:18.804889  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:18.804924  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:18.893980  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:18.894011  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:18.894024  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:18.933399  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:18.933440  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:21.468347  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:21.478452  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:21.478522  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:21.504650  536580 cri.go:89] found id: ""
	I1213 11:36:21.504674  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.504683  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:21.504689  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:21.504747  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:21.530323  536580 cri.go:89] found id: ""
	I1213 11:36:21.530347  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.530356  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:21.530363  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:21.530421  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:21.557316  536580 cri.go:89] found id: ""
	I1213 11:36:21.557346  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.557356  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:21.557363  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:21.557423  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:21.584144  536580 cri.go:89] found id: ""
	I1213 11:36:21.584173  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.584182  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:21.584189  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:21.584246  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:21.610916  536580 cri.go:89] found id: ""
	I1213 11:36:21.610940  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.610949  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:21.610955  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:21.611022  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:21.637668  536580 cri.go:89] found id: ""
	I1213 11:36:21.637691  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.637700  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:21.637706  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:21.637765  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:21.665123  536580 cri.go:89] found id: ""
	I1213 11:36:21.665146  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.665155  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:21.665162  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:21.665217  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:21.690014  536580 cri.go:89] found id: ""
	I1213 11:36:21.690036  536580 logs.go:282] 0 containers: []
	W1213 11:36:21.690044  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:21.690053  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:21.690065  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:21.763919  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:21.763942  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:21.763955  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:21.814197  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:21.814285  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:21.874061  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:21.874090  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:21.972934  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:21.972965  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:24.503298  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:24.513593  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:24.513669  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:24.538094  536580 cri.go:89] found id: ""
	I1213 11:36:24.538119  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.538129  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:24.538136  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:24.538192  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:24.563210  536580 cri.go:89] found id: ""
	I1213 11:36:24.563239  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.563248  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:24.563261  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:24.563322  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:24.588220  536580 cri.go:89] found id: ""
	I1213 11:36:24.588244  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.588253  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:24.588259  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:24.588324  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:24.617655  536580 cri.go:89] found id: ""
	I1213 11:36:24.617684  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.617694  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:24.617700  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:24.617756  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:24.644002  536580 cri.go:89] found id: ""
	I1213 11:36:24.644030  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.644038  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:24.644045  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:24.644102  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:24.670278  536580 cri.go:89] found id: ""
	I1213 11:36:24.670305  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.670314  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:24.670321  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:24.670377  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:24.696561  536580 cri.go:89] found id: ""
	I1213 11:36:24.696583  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.696592  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:24.696599  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:24.696657  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:24.721372  536580 cri.go:89] found id: ""
	I1213 11:36:24.721451  536580 logs.go:282] 0 containers: []
	W1213 11:36:24.721476  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:24.721493  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:24.721518  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:24.738162  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:24.738191  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:24.798595  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:24.798617  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:24.798630  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:24.829712  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:24.829748  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:24.858477  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:24.858506  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:27.425194  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:27.435775  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:27.435845  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:27.467137  536580 cri.go:89] found id: ""
	I1213 11:36:27.467164  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.467174  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:27.467182  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:27.467287  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:27.506539  536580 cri.go:89] found id: ""
	I1213 11:36:27.506564  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.506574  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:27.506580  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:27.506647  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:27.533521  536580 cri.go:89] found id: ""
	I1213 11:36:27.533547  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.533556  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:27.533563  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:27.533652  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:27.563249  536580 cri.go:89] found id: ""
	I1213 11:36:27.563273  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.563282  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:27.563304  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:27.563361  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:27.590271  536580 cri.go:89] found id: ""
	I1213 11:36:27.590296  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.590305  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:27.590319  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:27.590375  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:27.615825  536580 cri.go:89] found id: ""
	I1213 11:36:27.615853  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.615863  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:27.615870  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:27.615926  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:27.641297  536580 cri.go:89] found id: ""
	I1213 11:36:27.641320  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.641328  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:27.641335  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:27.641396  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:27.671187  536580 cri.go:89] found id: ""
	I1213 11:36:27.671208  536580 logs.go:282] 0 containers: []
	W1213 11:36:27.671217  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:27.671227  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:27.671238  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:27.740515  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:27.740553  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:27.756510  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:27.756537  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:27.825232  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:27.825252  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:27.825264  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:27.855849  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:27.855882  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:30.387093  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:30.397202  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:30.397267  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:30.426742  536580 cri.go:89] found id: ""
	I1213 11:36:30.426765  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.426774  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:30.426780  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:30.426839  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:30.451991  536580 cri.go:89] found id: ""
	I1213 11:36:30.452016  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.452025  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:30.452032  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:30.452090  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:30.477990  536580 cri.go:89] found id: ""
	I1213 11:36:30.478015  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.478024  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:30.478031  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:30.478090  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:30.504887  536580 cri.go:89] found id: ""
	I1213 11:36:30.504911  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.504921  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:30.504928  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:30.504986  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:30.530883  536580 cri.go:89] found id: ""
	I1213 11:36:30.530910  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.530919  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:30.530926  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:30.530989  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:30.556661  536580 cri.go:89] found id: ""
	I1213 11:36:30.556683  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.556692  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:30.556698  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:30.556764  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:30.581987  536580 cri.go:89] found id: ""
	I1213 11:36:30.582013  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.582022  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:30.582029  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:30.582085  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:30.610402  536580 cri.go:89] found id: ""
	I1213 11:36:30.610430  536580 logs.go:282] 0 containers: []
	W1213 11:36:30.610439  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:30.610448  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:30.610460  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:30.676886  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:30.676926  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:30.692937  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:30.692967  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:30.759692  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:30.759713  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:30.759725  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:30.790683  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:30.790719  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:33.320566  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:33.330334  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:33.330407  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:33.359607  536580 cri.go:89] found id: ""
	I1213 11:36:33.359630  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.359640  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:33.359647  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:33.359701  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:33.387935  536580 cri.go:89] found id: ""
	I1213 11:36:33.387961  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.387971  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:33.387977  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:33.388033  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:33.415505  536580 cri.go:89] found id: ""
	I1213 11:36:33.415549  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.415558  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:33.415566  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:33.415629  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:33.442988  536580 cri.go:89] found id: ""
	I1213 11:36:33.443010  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.443019  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:33.443026  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:33.443093  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:33.468756  536580 cri.go:89] found id: ""
	I1213 11:36:33.468781  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.468790  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:33.468797  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:33.468857  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:33.493866  536580 cri.go:89] found id: ""
	I1213 11:36:33.493889  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.493899  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:33.493906  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:33.493967  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:33.523696  536580 cri.go:89] found id: ""
	I1213 11:36:33.523722  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.523731  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:33.523740  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:33.523800  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:33.550860  536580 cri.go:89] found id: ""
	I1213 11:36:33.550883  536580 logs.go:282] 0 containers: []
	W1213 11:36:33.550892  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:33.550901  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:33.550912  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:33.579554  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:33.579624  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:33.646087  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:33.646126  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:33.662620  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:33.662648  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:33.724117  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:33.724136  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:33.724148  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:36.255726  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:36.275842  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:36.275921  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:36.361825  536580 cri.go:89] found id: ""
	I1213 11:36:36.361856  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.361866  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:36.361880  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:36.361955  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:36.424024  536580 cri.go:89] found id: ""
	I1213 11:36:36.424060  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.424069  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:36.424076  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:36.424154  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:36.459882  536580 cri.go:89] found id: ""
	I1213 11:36:36.459911  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.459920  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:36.459927  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:36.459981  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:36.489967  536580 cri.go:89] found id: ""
	I1213 11:36:36.489994  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.490003  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:36.490010  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:36.490074  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:36.519674  536580 cri.go:89] found id: ""
	I1213 11:36:36.519709  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.519718  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:36.519725  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:36.519789  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:36.546861  536580 cri.go:89] found id: ""
	I1213 11:36:36.546889  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.546897  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:36.546905  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:36.546971  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:36.572751  536580 cri.go:89] found id: ""
	I1213 11:36:36.572787  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.572796  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:36.572804  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:36.572875  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:36.598033  536580 cri.go:89] found id: ""
	I1213 11:36:36.598102  536580 logs.go:282] 0 containers: []
	W1213 11:36:36.598129  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:36.598150  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:36.598191  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:36.614545  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:36.614574  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:36.681573  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:36.681594  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:36.681607  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:36.712320  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:36.712353  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:36.743446  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:36.743474  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:39.313337  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:39.323742  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:39.323808  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:39.354473  536580 cri.go:89] found id: ""
	I1213 11:36:39.354494  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.354502  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:39.354509  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:39.354565  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:39.388079  536580 cri.go:89] found id: ""
	I1213 11:36:39.388107  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.388117  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:39.388123  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:39.388183  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:39.430442  536580 cri.go:89] found id: ""
	I1213 11:36:39.430469  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.430478  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:39.430485  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:39.430539  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:39.468462  536580 cri.go:89] found id: ""
	I1213 11:36:39.468490  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.468499  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:39.468506  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:39.468615  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:39.503329  536580 cri.go:89] found id: ""
	I1213 11:36:39.503352  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.503360  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:39.503371  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:39.503426  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:39.551937  536580 cri.go:89] found id: ""
	I1213 11:36:39.551960  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.551968  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:39.551975  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:39.552031  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:39.610416  536580 cri.go:89] found id: ""
	I1213 11:36:39.610437  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.610446  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:39.610452  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:39.610506  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:39.690222  536580 cri.go:89] found id: ""
	I1213 11:36:39.690243  536580 logs.go:282] 0 containers: []
	W1213 11:36:39.690252  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:39.690261  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:39.690272  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:39.771473  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:39.771567  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:39.797916  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:39.797947  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:39.900885  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:39.900903  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:39.900915  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:39.935719  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:39.935757  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:42.478929  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:42.489690  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:42.489759  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:42.523340  536580 cri.go:89] found id: ""
	I1213 11:36:42.523361  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.523369  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:42.523376  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:42.523438  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:42.566470  536580 cri.go:89] found id: ""
	I1213 11:36:42.566492  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.566501  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:42.566509  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:42.566568  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:42.604146  536580 cri.go:89] found id: ""
	I1213 11:36:42.604170  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.604190  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:42.604197  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:42.604256  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:42.636779  536580 cri.go:89] found id: ""
	I1213 11:36:42.636801  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.636810  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:42.636816  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:42.636873  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:42.668354  536580 cri.go:89] found id: ""
	I1213 11:36:42.668430  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.668454  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:42.668475  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:42.668555  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:42.702200  536580 cri.go:89] found id: ""
	I1213 11:36:42.702234  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.702245  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:42.702251  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:42.702339  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:42.746168  536580 cri.go:89] found id: ""
	I1213 11:36:42.746205  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.746215  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:42.746239  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:42.746322  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:42.793421  536580 cri.go:89] found id: ""
	I1213 11:36:42.793456  536580 logs.go:282] 0 containers: []
	W1213 11:36:42.793466  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:42.793491  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:42.793508  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:42.923270  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:42.923288  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:42.923300  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:42.959076  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:42.959148  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:43.018863  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:43.018888  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:43.101473  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:43.101547  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:45.622960  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:45.633317  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:45.633391  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:45.660108  536580 cri.go:89] found id: ""
	I1213 11:36:45.660137  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.660148  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:45.660156  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:45.660217  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:45.690852  536580 cri.go:89] found id: ""
	I1213 11:36:45.690879  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.690889  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:45.690896  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:45.690963  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:45.717631  536580 cri.go:89] found id: ""
	I1213 11:36:45.717654  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.717663  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:45.717669  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:45.717731  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:45.744366  536580 cri.go:89] found id: ""
	I1213 11:36:45.744393  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.744403  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:45.744410  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:45.744470  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:45.776658  536580 cri.go:89] found id: ""
	I1213 11:36:45.776687  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.776696  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:45.776703  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:45.776765  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:45.811842  536580 cri.go:89] found id: ""
	I1213 11:36:45.811866  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.811886  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:45.811893  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:45.811958  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:45.844133  536580 cri.go:89] found id: ""
	I1213 11:36:45.844155  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.844164  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:45.844171  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:45.844230  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:45.874846  536580 cri.go:89] found id: ""
	I1213 11:36:45.874869  536580 logs.go:282] 0 containers: []
	W1213 11:36:45.874878  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:45.874887  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:45.874900  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:45.942232  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:45.942271  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:45.958934  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:45.958963  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:46.024198  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:46.024318  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:46.024350  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:46.062132  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:46.062231  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:48.603658  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:48.613867  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:48.613939  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:48.640435  536580 cri.go:89] found id: ""
	I1213 11:36:48.640464  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.640474  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:48.640481  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:48.640546  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:48.666770  536580 cri.go:89] found id: ""
	I1213 11:36:48.666800  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.666809  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:48.666816  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:48.666871  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:48.692514  536580 cri.go:89] found id: ""
	I1213 11:36:48.692541  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.692550  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:48.692557  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:48.692613  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:48.719666  536580 cri.go:89] found id: ""
	I1213 11:36:48.719690  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.719700  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:48.719707  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:48.719768  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:48.746814  536580 cri.go:89] found id: ""
	I1213 11:36:48.746847  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.746856  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:48.746863  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:48.746919  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:48.788141  536580 cri.go:89] found id: ""
	I1213 11:36:48.788172  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.788182  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:48.788189  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:48.788244  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:48.822307  536580 cri.go:89] found id: ""
	I1213 11:36:48.822335  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.822344  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:48.822351  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:48.822411  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:48.851891  536580 cri.go:89] found id: ""
	I1213 11:36:48.851919  536580 logs.go:282] 0 containers: []
	W1213 11:36:48.851929  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:48.851938  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:48.851949  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:48.882056  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:48.882087  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:48.948315  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:48.948354  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:48.964722  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:48.964750  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:49.033732  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:49.033754  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:49.033769  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:51.564431  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:51.575191  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:51.575263  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:51.601203  536580 cri.go:89] found id: ""
	I1213 11:36:51.601229  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.601238  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:51.601244  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:51.601303  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:51.626320  536580 cri.go:89] found id: ""
	I1213 11:36:51.626347  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.626356  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:51.626363  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:51.626417  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:51.652806  536580 cri.go:89] found id: ""
	I1213 11:36:51.652833  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.652843  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:51.652849  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:51.652906  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:51.678125  536580 cri.go:89] found id: ""
	I1213 11:36:51.678149  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.678159  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:51.678165  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:51.678225  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:51.706066  536580 cri.go:89] found id: ""
	I1213 11:36:51.706088  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.706096  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:51.706103  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:51.706165  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:51.732720  536580 cri.go:89] found id: ""
	I1213 11:36:51.732744  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.732753  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:51.732761  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:51.732821  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:51.758083  536580 cri.go:89] found id: ""
	I1213 11:36:51.758166  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.758179  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:51.758186  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:51.758277  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:51.804840  536580 cri.go:89] found id: ""
	I1213 11:36:51.804861  536580 logs.go:282] 0 containers: []
	W1213 11:36:51.804870  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:51.804878  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:51.804889  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:51.881801  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:51.881837  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:51.898575  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:51.898605  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:51.965798  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:51.965815  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:51.965836  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:51.997071  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:51.997106  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:54.535048  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:54.545497  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:54.545570  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:54.572637  536580 cri.go:89] found id: ""
	I1213 11:36:54.572662  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.572671  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:54.572677  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:54.572735  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:54.597794  536580 cri.go:89] found id: ""
	I1213 11:36:54.597818  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.597827  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:54.597833  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:54.597895  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:54.629327  536580 cri.go:89] found id: ""
	I1213 11:36:54.629350  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.629359  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:54.629365  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:54.629428  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:54.658859  536580 cri.go:89] found id: ""
	I1213 11:36:54.658885  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.658895  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:54.658902  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:54.658957  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:54.684228  536580 cri.go:89] found id: ""
	I1213 11:36:54.684253  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.684262  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:54.684271  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:54.684338  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:54.711700  536580 cri.go:89] found id: ""
	I1213 11:36:54.711722  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.711731  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:54.711737  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:54.711792  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:54.737437  536580 cri.go:89] found id: ""
	I1213 11:36:54.737460  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.737468  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:54.737475  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:54.737530  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:54.764418  536580 cri.go:89] found id: ""
	I1213 11:36:54.764492  536580 logs.go:282] 0 containers: []
	W1213 11:36:54.764516  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:54.764539  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:54.764579  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:54.853767  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:54.853817  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:54.870950  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:54.870989  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:54.936240  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:54.936263  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:54.936279  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:54.967117  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:54.967151  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:36:57.500057  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:36:57.516734  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:36:57.516816  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:36:57.566392  536580 cri.go:89] found id: ""
	I1213 11:36:57.566421  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.566443  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:36:57.566450  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:36:57.566507  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:36:57.598588  536580 cri.go:89] found id: ""
	I1213 11:36:57.598616  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.598634  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:36:57.598642  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:36:57.598721  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:36:57.638430  536580 cri.go:89] found id: ""
	I1213 11:36:57.638458  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.638468  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:36:57.638479  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:36:57.638544  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:36:57.671838  536580 cri.go:89] found id: ""
	I1213 11:36:57.671910  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.671943  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:36:57.671963  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:36:57.672065  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:36:57.697703  536580 cri.go:89] found id: ""
	I1213 11:36:57.697778  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.697809  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:36:57.697852  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:36:57.697950  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:36:57.723918  536580 cri.go:89] found id: ""
	I1213 11:36:57.724000  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.724024  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:36:57.724044  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:36:57.724145  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:36:57.753093  536580 cri.go:89] found id: ""
	I1213 11:36:57.753172  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.753197  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:36:57.753216  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:36:57.753302  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:36:57.798923  536580 cri.go:89] found id: ""
	I1213 11:36:57.798999  536580 logs.go:282] 0 containers: []
	W1213 11:36:57.799034  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:36:57.799060  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:36:57.799085  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:36:57.919115  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:36:57.919200  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:36:57.936041  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:36:57.936064  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:36:58.039179  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:36:58.039251  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:36:58.039278  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:36:58.073666  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:36:58.073695  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:00.612916  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:00.622842  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:00.622912  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:00.648480  536580 cri.go:89] found id: ""
	I1213 11:37:00.648501  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.648510  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:00.648516  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:00.648575  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:00.673825  536580 cri.go:89] found id: ""
	I1213 11:37:00.673848  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.673857  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:00.673863  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:00.673921  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:00.701273  536580 cri.go:89] found id: ""
	I1213 11:37:00.701298  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.701308  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:00.701315  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:00.701375  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:00.730438  536580 cri.go:89] found id: ""
	I1213 11:37:00.730460  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.730469  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:00.730475  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:00.730535  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:00.758822  536580 cri.go:89] found id: ""
	I1213 11:37:00.758847  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.758857  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:00.758864  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:00.758919  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:00.794866  536580 cri.go:89] found id: ""
	I1213 11:37:00.794895  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.794904  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:00.794918  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:00.794974  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:00.824841  536580 cri.go:89] found id: ""
	I1213 11:37:00.824866  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.824875  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:00.824881  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:00.824938  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:00.855975  536580 cri.go:89] found id: ""
	I1213 11:37:00.855999  536580 logs.go:282] 0 containers: []
	W1213 11:37:00.856007  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:00.856017  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:00.856028  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:00.922537  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:00.922557  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:00.922570  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:00.953091  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:00.953125  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:00.981667  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:00.981696  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:01.049096  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:01.049133  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:03.568310  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:03.578679  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:03.578762  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:03.606080  536580 cri.go:89] found id: ""
	I1213 11:37:03.606105  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.606114  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:03.606121  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:03.606176  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:03.632023  536580 cri.go:89] found id: ""
	I1213 11:37:03.632091  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.632115  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:03.632128  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:03.632201  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:03.658748  536580 cri.go:89] found id: ""
	I1213 11:37:03.658783  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.658793  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:03.658800  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:03.658868  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:03.686549  536580 cri.go:89] found id: ""
	I1213 11:37:03.686580  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.686590  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:03.686596  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:03.686682  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:03.717571  536580 cri.go:89] found id: ""
	I1213 11:37:03.717646  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.717671  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:03.717695  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:03.717781  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:03.745125  536580 cri.go:89] found id: ""
	I1213 11:37:03.745191  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.745214  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:03.745227  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:03.745307  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:03.782899  536580 cri.go:89] found id: ""
	I1213 11:37:03.782926  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.782935  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:03.782942  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:03.782999  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:03.815740  536580 cri.go:89] found id: ""
	I1213 11:37:03.815766  536580 logs.go:282] 0 containers: []
	W1213 11:37:03.815775  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:03.815783  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:03.815794  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:03.851317  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:03.851358  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:03.884049  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:03.884075  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:03.951563  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:03.951603  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:03.967852  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:03.967881  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:04.032985  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:06.533239  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:06.545157  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:06.545231  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:06.571904  536580 cri.go:89] found id: ""
	I1213 11:37:06.571931  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.571940  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:06.571948  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:06.572007  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:06.598546  536580 cri.go:89] found id: ""
	I1213 11:37:06.598573  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.598583  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:06.598589  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:06.598649  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:06.625589  536580 cri.go:89] found id: ""
	I1213 11:37:06.625613  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.625622  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:06.625628  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:06.625685  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:06.652472  536580 cri.go:89] found id: ""
	I1213 11:37:06.652498  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.652508  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:06.652515  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:06.652570  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:06.679540  536580 cri.go:89] found id: ""
	I1213 11:37:06.679571  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.679581  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:06.679587  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:06.679651  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:06.705483  536580 cri.go:89] found id: ""
	I1213 11:37:06.705548  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.705562  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:06.705569  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:06.705626  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:06.731682  536580 cri.go:89] found id: ""
	I1213 11:37:06.731706  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.731714  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:06.731720  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:06.731782  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:06.758123  536580 cri.go:89] found id: ""
	I1213 11:37:06.758148  536580 logs.go:282] 0 containers: []
	W1213 11:37:06.758158  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:06.758167  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:06.758180  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:06.789885  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:06.789926  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:06.833331  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:06.833359  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:06.903104  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:06.903142  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:06.919183  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:06.919261  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:06.986824  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:09.488538  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:09.499953  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:09.500019  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:09.525693  536580 cri.go:89] found id: ""
	I1213 11:37:09.525718  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.525727  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:09.525734  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:09.525789  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:09.550957  536580 cri.go:89] found id: ""
	I1213 11:37:09.550984  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.550993  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:09.550999  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:09.551058  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:09.576029  536580 cri.go:89] found id: ""
	I1213 11:37:09.576057  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.576066  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:09.576072  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:09.576129  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:09.602335  536580 cri.go:89] found id: ""
	I1213 11:37:09.602360  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.602369  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:09.602376  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:09.602432  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:09.630468  536580 cri.go:89] found id: ""
	I1213 11:37:09.630492  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.630501  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:09.630507  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:09.630561  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:09.656311  536580 cri.go:89] found id: ""
	I1213 11:37:09.656337  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.656347  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:09.656354  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:09.656417  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:09.681864  536580 cri.go:89] found id: ""
	I1213 11:37:09.681891  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.681901  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:09.681907  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:09.681963  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:09.713146  536580 cri.go:89] found id: ""
	I1213 11:37:09.713173  536580 logs.go:282] 0 containers: []
	W1213 11:37:09.713181  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:09.713191  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:09.713201  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:09.779602  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:09.779687  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:09.802112  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:09.802139  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:09.875856  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:09.875881  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:09.875894  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:09.907560  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:09.907593  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:12.436202  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:12.446603  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:12.446673  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:12.474071  536580 cri.go:89] found id: ""
	I1213 11:37:12.474093  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.474103  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:12.474111  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:12.474171  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:12.501027  536580 cri.go:89] found id: ""
	I1213 11:37:12.501048  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.501058  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:12.501064  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:12.501122  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:12.527929  536580 cri.go:89] found id: ""
	I1213 11:37:12.527957  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.527966  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:12.527973  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:12.528037  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:12.553625  536580 cri.go:89] found id: ""
	I1213 11:37:12.553695  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.553718  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:12.553738  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:12.553809  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:12.579200  536580 cri.go:89] found id: ""
	I1213 11:37:12.579228  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.579238  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:12.579245  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:12.579332  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:12.606154  536580 cri.go:89] found id: ""
	I1213 11:37:12.606181  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.606190  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:12.606196  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:12.606252  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:12.631449  536580 cri.go:89] found id: ""
	I1213 11:37:12.631479  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.631488  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:12.631495  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:12.631588  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:12.656365  536580 cri.go:89] found id: ""
	I1213 11:37:12.656388  536580 logs.go:282] 0 containers: []
	W1213 11:37:12.656398  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:12.656409  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:12.656422  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:12.686891  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:12.686926  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:12.719359  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:12.719386  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:12.804115  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:12.804218  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:12.826082  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:12.826201  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:12.894125  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:15.395002  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:15.405356  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:15.405428  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:15.431727  536580 cri.go:89] found id: ""
	I1213 11:37:15.431748  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.431757  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:15.431763  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:15.431825  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:15.457199  536580 cri.go:89] found id: ""
	I1213 11:37:15.457228  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.457238  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:15.457246  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:15.457301  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:15.485914  536580 cri.go:89] found id: ""
	I1213 11:37:15.485940  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.485950  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:15.485957  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:15.486015  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:15.512535  536580 cri.go:89] found id: ""
	I1213 11:37:15.512563  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.512572  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:15.512579  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:15.512650  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:15.542017  536580 cri.go:89] found id: ""
	I1213 11:37:15.542040  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.542049  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:15.542056  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:15.542121  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:15.568488  536580 cri.go:89] found id: ""
	I1213 11:37:15.568511  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.568519  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:15.568525  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:15.568581  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:15.594701  536580 cri.go:89] found id: ""
	I1213 11:37:15.594723  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.594733  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:15.594739  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:15.594811  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:15.620637  536580 cri.go:89] found id: ""
	I1213 11:37:15.620663  536580 logs.go:282] 0 containers: []
	W1213 11:37:15.620673  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:15.620683  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:15.620695  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:15.686917  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:15.686956  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:15.704537  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:15.704568  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:15.773333  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:15.773357  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:15.773369  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:15.814531  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:15.814607  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:18.349616  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:18.360127  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:18.360204  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:18.388982  536580 cri.go:89] found id: ""
	I1213 11:37:18.389010  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.389020  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:18.389027  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:18.389087  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:18.417848  536580 cri.go:89] found id: ""
	I1213 11:37:18.417874  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.417884  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:18.417891  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:18.417943  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:18.447325  536580 cri.go:89] found id: ""
	I1213 11:37:18.447352  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.447362  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:18.447368  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:18.447430  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:18.473681  536580 cri.go:89] found id: ""
	I1213 11:37:18.473707  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.473716  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:18.473724  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:18.473778  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:18.500701  536580 cri.go:89] found id: ""
	I1213 11:37:18.500722  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.500731  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:18.500737  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:18.500796  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:18.527440  536580 cri.go:89] found id: ""
	I1213 11:37:18.527467  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.527476  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:18.527483  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:18.527570  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:18.554134  536580 cri.go:89] found id: ""
	I1213 11:37:18.554163  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.554172  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:18.554178  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:18.554238  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:18.580292  536580 cri.go:89] found id: ""
	I1213 11:37:18.580323  536580 logs.go:282] 0 containers: []
	W1213 11:37:18.580331  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:18.580340  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:18.580352  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:18.651730  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:18.651768  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:18.667974  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:18.668005  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:18.734925  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:18.735001  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:18.735033  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:18.765647  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:18.765682  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:21.307401  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:21.317636  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:21.317709  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:21.344857  536580 cri.go:89] found id: ""
	I1213 11:37:21.344887  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.344896  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:21.344902  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:21.344963  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:21.373988  536580 cri.go:89] found id: ""
	I1213 11:37:21.374011  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.374019  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:21.374026  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:21.374086  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:21.403421  536580 cri.go:89] found id: ""
	I1213 11:37:21.403442  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.403451  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:21.403457  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:21.403540  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:21.428677  536580 cri.go:89] found id: ""
	I1213 11:37:21.428704  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.428713  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:21.428720  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:21.428784  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:21.454867  536580 cri.go:89] found id: ""
	I1213 11:37:21.454890  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.454898  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:21.454906  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:21.454961  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:21.480691  536580 cri.go:89] found id: ""
	I1213 11:37:21.480718  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.480727  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:21.480734  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:21.480791  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:21.512055  536580 cri.go:89] found id: ""
	I1213 11:37:21.512080  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.512089  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:21.512095  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:21.512152  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:21.537106  536580 cri.go:89] found id: ""
	I1213 11:37:21.537132  536580 logs.go:282] 0 containers: []
	W1213 11:37:21.537142  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:21.537151  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:21.537161  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:21.604139  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:21.604176  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:21.620723  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:21.620752  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:21.686073  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:21.686093  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:21.686104  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:21.717539  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:21.717574  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:24.247632  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:24.258701  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:24.258772  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:24.295746  536580 cri.go:89] found id: ""
	I1213 11:37:24.295768  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.295777  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:24.295783  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:24.295841  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:24.337322  536580 cri.go:89] found id: ""
	I1213 11:37:24.337348  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.337378  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:24.337391  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:24.337456  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:24.368807  536580 cri.go:89] found id: ""
	I1213 11:37:24.368836  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.368846  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:24.368852  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:24.368907  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:24.409546  536580 cri.go:89] found id: ""
	I1213 11:37:24.409575  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.409585  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:24.409592  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:24.409653  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:24.446958  536580 cri.go:89] found id: ""
	I1213 11:37:24.446984  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.446993  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:24.447000  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:24.447056  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:24.477304  536580 cri.go:89] found id: ""
	I1213 11:37:24.477325  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.477334  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:24.477340  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:24.477394  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:24.516721  536580 cri.go:89] found id: ""
	I1213 11:37:24.516742  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.516751  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:24.516757  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:24.516898  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:24.550739  536580 cri.go:89] found id: ""
	I1213 11:37:24.550759  536580 logs.go:282] 0 containers: []
	W1213 11:37:24.550767  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:24.550776  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:24.550790  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:24.572692  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:24.572722  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:24.655602  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:24.655625  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:24.655648  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:24.693006  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:24.693043  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:24.728885  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:24.728910  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:27.327659  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:27.339044  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:27.339114  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:27.394510  536580 cri.go:89] found id: ""
	I1213 11:37:27.394532  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.394540  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:27.394546  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:27.394600  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:27.423445  536580 cri.go:89] found id: ""
	I1213 11:37:27.423468  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.423476  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:27.423483  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:27.423616  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:27.461385  536580 cri.go:89] found id: ""
	I1213 11:37:27.461406  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.461415  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:27.461422  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:27.461477  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:27.497264  536580 cri.go:89] found id: ""
	I1213 11:37:27.497291  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.497301  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:27.497309  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:27.497366  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:27.536576  536580 cri.go:89] found id: ""
	I1213 11:37:27.536604  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.536616  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:27.536623  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:27.536681  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:27.570505  536580 cri.go:89] found id: ""
	I1213 11:37:27.570531  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.570539  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:27.570546  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:27.570602  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:27.602718  536580 cri.go:89] found id: ""
	I1213 11:37:27.602746  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.602756  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:27.602762  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:27.602819  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:27.634670  536580 cri.go:89] found id: ""
	I1213 11:37:27.634698  536580 logs.go:282] 0 containers: []
	W1213 11:37:27.634708  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:27.634717  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:27.634730  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:27.711344  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:27.711381  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:27.728683  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:27.728713  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:27.827624  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:27.827646  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:27.827658  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:27.889342  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:27.889378  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:30.434511  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:30.444735  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:30.444817  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:30.471171  536580 cri.go:89] found id: ""
	I1213 11:37:30.471195  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.471204  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:30.471210  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:30.471282  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:30.500924  536580 cri.go:89] found id: ""
	I1213 11:37:30.500954  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.500963  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:30.500969  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:30.501025  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:30.526511  536580 cri.go:89] found id: ""
	I1213 11:37:30.526533  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.526543  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:30.526549  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:30.526604  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:30.552185  536580 cri.go:89] found id: ""
	I1213 11:37:30.552211  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.552220  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:30.552227  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:30.552285  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:30.577465  536580 cri.go:89] found id: ""
	I1213 11:37:30.577487  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.577495  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:30.577502  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:30.577559  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:30.603393  536580 cri.go:89] found id: ""
	I1213 11:37:30.603414  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.603423  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:30.603430  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:30.603484  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:30.628968  536580 cri.go:89] found id: ""
	I1213 11:37:30.628994  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.629004  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:30.629011  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:30.629100  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:30.655199  536580 cri.go:89] found id: ""
	I1213 11:37:30.655227  536580 logs.go:282] 0 containers: []
	W1213 11:37:30.655236  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:30.655246  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:30.655258  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:30.722061  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:30.722100  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:30.738292  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:30.738320  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:30.809923  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:30.809946  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:30.809959  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:30.846516  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:30.846551  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:33.379658  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:33.390057  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:33.390128  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:33.416536  536580 cri.go:89] found id: ""
	I1213 11:37:33.416560  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.416569  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:33.416575  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:33.416647  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:33.441739  536580 cri.go:89] found id: ""
	I1213 11:37:33.441763  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.441772  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:33.441779  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:33.441837  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:33.470300  536580 cri.go:89] found id: ""
	I1213 11:37:33.470323  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.470332  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:33.470338  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:33.470405  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:33.498066  536580 cri.go:89] found id: ""
	I1213 11:37:33.498093  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.498103  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:33.498110  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:33.498169  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:33.526477  536580 cri.go:89] found id: ""
	I1213 11:37:33.526505  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.526514  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:33.526522  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:33.526576  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:33.552979  536580 cri.go:89] found id: ""
	I1213 11:37:33.553000  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.553008  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:33.553015  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:33.553072  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:33.578901  536580 cri.go:89] found id: ""
	I1213 11:37:33.578924  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.578932  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:33.578938  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:33.578994  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:33.607018  536580 cri.go:89] found id: ""
	I1213 11:37:33.607040  536580 logs.go:282] 0 containers: []
	W1213 11:37:33.607049  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:33.607058  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:33.607070  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:33.673497  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:33.673533  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:33.689604  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:33.689633  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:33.758425  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:33.758495  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:33.758523  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:33.796208  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:33.796289  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:36.330370  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:36.340746  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:36.340815  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:36.366649  536580 cri.go:89] found id: ""
	I1213 11:37:36.366674  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.366683  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:36.366689  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:36.366743  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:36.401275  536580 cri.go:89] found id: ""
	I1213 11:37:36.401301  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.401310  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:36.401317  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:36.401374  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:36.426843  536580 cri.go:89] found id: ""
	I1213 11:37:36.426870  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.426880  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:36.426886  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:36.426943  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:36.453432  536580 cri.go:89] found id: ""
	I1213 11:37:36.453456  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.453465  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:36.453472  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:36.453597  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:36.482577  536580 cri.go:89] found id: ""
	I1213 11:37:36.482602  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.482611  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:36.482618  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:36.482674  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:36.511571  536580 cri.go:89] found id: ""
	I1213 11:37:36.511614  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.511625  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:36.511636  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:36.511699  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:36.538729  536580 cri.go:89] found id: ""
	I1213 11:37:36.538757  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.538766  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:36.538773  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:36.538828  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:36.568610  536580 cri.go:89] found id: ""
	I1213 11:37:36.568637  536580 logs.go:282] 0 containers: []
	W1213 11:37:36.568648  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:36.568658  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:36.568670  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:36.600929  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:36.600958  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:36.672757  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:36.672803  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:36.689167  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:36.689200  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:36.753310  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:36.753331  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:36.753346  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:39.286661  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:39.296602  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:39.296671  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:39.322692  536580 cri.go:89] found id: ""
	I1213 11:37:39.322717  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.322726  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:39.322733  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:39.322789  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:39.349259  536580 cri.go:89] found id: ""
	I1213 11:37:39.349283  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.349291  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:39.349298  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:39.349352  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:39.380061  536580 cri.go:89] found id: ""
	I1213 11:37:39.380088  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.380097  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:39.380104  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:39.380163  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:39.410936  536580 cri.go:89] found id: ""
	I1213 11:37:39.410959  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.410980  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:39.410987  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:39.411049  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:39.437175  536580 cri.go:89] found id: ""
	I1213 11:37:39.437204  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.437214  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:39.437221  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:39.437281  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:39.467433  536580 cri.go:89] found id: ""
	I1213 11:37:39.467459  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.467468  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:39.467475  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:39.467555  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:39.494350  536580 cri.go:89] found id: ""
	I1213 11:37:39.494376  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.494385  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:39.494392  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:39.494446  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:39.522734  536580 cri.go:89] found id: ""
	I1213 11:37:39.522756  536580 logs.go:282] 0 containers: []
	W1213 11:37:39.522765  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:39.522773  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:39.522785  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:39.591126  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:39.591161  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:39.606759  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:39.606793  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:39.670636  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:39.670660  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:39.670672  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:39.702032  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:39.702070  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:42.232303  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:42.245096  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:42.245177  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:42.273549  536580 cri.go:89] found id: ""
	I1213 11:37:42.273578  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.273588  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:42.273595  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:42.273657  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:42.302093  536580 cri.go:89] found id: ""
	I1213 11:37:42.302122  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.302133  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:42.302141  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:42.302205  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:42.330150  536580 cri.go:89] found id: ""
	I1213 11:37:42.330175  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.330184  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:42.330191  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:42.330260  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:42.360987  536580 cri.go:89] found id: ""
	I1213 11:37:42.361017  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.361027  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:42.361034  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:42.361092  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:42.393773  536580 cri.go:89] found id: ""
	I1213 11:37:42.393799  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.393808  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:42.393815  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:42.393874  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:42.419080  536580 cri.go:89] found id: ""
	I1213 11:37:42.419105  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.419114  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:42.419120  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:42.419176  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:42.450177  536580 cri.go:89] found id: ""
	I1213 11:37:42.450203  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.450212  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:42.450219  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:42.450274  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:42.477215  536580 cri.go:89] found id: ""
	I1213 11:37:42.477239  536580 logs.go:282] 0 containers: []
	W1213 11:37:42.477249  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:42.477258  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:42.477269  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:42.508233  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:42.508270  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:42.541703  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:42.541734  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:42.608477  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:42.608514  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:42.624706  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:42.624738  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:42.691354  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:45.191657  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:45.223676  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:45.223765  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:45.268571  536580 cri.go:89] found id: ""
	I1213 11:37:45.268649  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.268673  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:45.268695  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:45.268787  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:45.312498  536580 cri.go:89] found id: ""
	I1213 11:37:45.312518  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.312527  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:45.312533  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:45.312590  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:45.361777  536580 cri.go:89] found id: ""
	I1213 11:37:45.361800  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.361808  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:45.361815  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:45.361872  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:45.398832  536580 cri.go:89] found id: ""
	I1213 11:37:45.398852  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.398861  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:45.398867  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:45.398921  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:45.426593  536580 cri.go:89] found id: ""
	I1213 11:37:45.426614  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.426623  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:45.426629  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:45.426686  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:45.455839  536580 cri.go:89] found id: ""
	I1213 11:37:45.455859  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.455868  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:45.455880  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:45.455934  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:45.488619  536580 cri.go:89] found id: ""
	I1213 11:37:45.488640  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.488649  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:45.488655  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:45.488710  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:45.519195  536580 cri.go:89] found id: ""
	I1213 11:37:45.519223  536580 logs.go:282] 0 containers: []
	W1213 11:37:45.519232  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:45.519241  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:45.519252  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:45.599611  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:45.599647  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:45.626873  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:45.626908  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:45.727348  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:45.727372  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:45.727385  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:45.757899  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:45.757932  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:48.299627  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:48.312795  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:48.312861  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:48.346049  536580 cri.go:89] found id: ""
	I1213 11:37:48.346070  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.346079  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:48.346085  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:48.346144  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:48.390895  536580 cri.go:89] found id: ""
	I1213 11:37:48.390915  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.390924  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:48.390930  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:48.391063  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:48.421869  536580 cri.go:89] found id: ""
	I1213 11:37:48.421889  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.421898  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:48.421904  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:48.421958  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:48.453188  536580 cri.go:89] found id: ""
	I1213 11:37:48.453211  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.453220  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:48.453226  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:48.453287  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:48.482021  536580 cri.go:89] found id: ""
	I1213 11:37:48.482043  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.482055  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:48.482062  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:48.482117  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:48.514165  536580 cri.go:89] found id: ""
	I1213 11:37:48.514187  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.514196  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:48.514203  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:48.514260  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:48.542611  536580 cri.go:89] found id: ""
	I1213 11:37:48.542632  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.542640  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:48.542646  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:48.542706  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:48.579361  536580 cri.go:89] found id: ""
	I1213 11:37:48.579385  536580 logs.go:282] 0 containers: []
	W1213 11:37:48.579394  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:48.579403  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:48.579415  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:48.658065  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:48.658146  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:48.675822  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:48.675851  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:48.787073  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:48.787132  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:48.787169  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 11:37:48.859289  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:48.859331  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:51.394080  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:51.404122  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:37:51.404189  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:37:51.433635  536580 cri.go:89] found id: ""
	I1213 11:37:51.433658  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.433666  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:37:51.433673  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:37:51.433728  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:37:51.461198  536580 cri.go:89] found id: ""
	I1213 11:37:51.461223  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.461233  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:37:51.461239  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:37:51.461299  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:37:51.486841  536580 cri.go:89] found id: ""
	I1213 11:37:51.486870  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.486879  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:37:51.486889  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:37:51.486953  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:37:51.512151  536580 cri.go:89] found id: ""
	I1213 11:37:51.512175  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.512183  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:37:51.512190  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:37:51.512248  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:37:51.538448  536580 cri.go:89] found id: ""
	I1213 11:37:51.538470  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.538479  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:37:51.538485  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:37:51.538540  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:37:51.563676  536580 cri.go:89] found id: ""
	I1213 11:37:51.563702  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.563712  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:37:51.563718  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:37:51.563776  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:37:51.594330  536580 cri.go:89] found id: ""
	I1213 11:37:51.594356  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.594365  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:37:51.594372  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:37:51.594431  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:37:51.620250  536580 cri.go:89] found id: ""
	I1213 11:37:51.620276  536580 logs.go:282] 0 containers: []
	W1213 11:37:51.620286  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:37:51.620301  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:37:51.620312  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:37:51.652101  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:37:51.652132  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:37:51.718813  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:37:51.718849  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:37:51.735221  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:37:51.735250  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:37:51.813094  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:37:51.813118  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:37:51.813132  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
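
Each polling cycle gathers the same log sources before the next apiserver check. A sketch of collecting them manually on the node, using the exact commands from the cycle above (the 400-line tail is simply what minikube uses; adjust as needed):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
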
	I1213 11:37:54.349803  536580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:37:54.360267  536580 kubeadm.go:602] duration metric: took 4m1.832679892s to restartPrimaryControlPlane
	W1213 11:37:54.360345  536580 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1213 11:37:54.360409  536580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:37:54.780617  536580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:37:54.793931  536580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:37:54.801941  536580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:37:54.802004  536580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:37:54.810056  536580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:37:54.810073  536580 kubeadm.go:158] found existing configuration files:
	
	I1213 11:37:54.810125  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:37:54.818329  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:37:54.818403  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:37:54.825989  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:37:54.833717  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:37:54.833795  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:37:54.841030  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:37:54.848965  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:37:54.849061  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:37:54.856748  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:37:54.864720  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:37:54.864783  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
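
The block above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is not found; here every grep exits with status 2 because the preceding kubeadm reset already deleted the files, so each rm is a no-op. The same pattern as a standalone sketch (paths and endpoint copied from the log; run on the node):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	done
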
	I1213 11:37:54.872224  536580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:37:54.912579  536580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:37:54.912641  536580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:37:54.986669  536580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:37:54.986747  536580 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:37:54.986787  536580 kubeadm.go:319] OS: Linux
	I1213 11:37:54.986836  536580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:37:54.986889  536580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:37:54.986946  536580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:37:54.986999  536580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:37:54.987053  536580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:37:54.987105  536580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:37:54.987154  536580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:37:54.987203  536580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:37:54.987253  536580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:37:55.058067  536580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:37:55.058200  536580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:37:55.058305  536580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:37:55.069065  536580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:37:55.073384  536580 out.go:252]   - Generating certificates and keys ...
	I1213 11:37:55.073498  536580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:37:55.073593  536580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:37:55.073684  536580 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:37:55.073758  536580 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:37:55.073846  536580 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:37:55.073915  536580 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:37:55.073994  536580 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:37:55.074078  536580 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:37:55.074166  536580 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:37:55.074255  536580 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:37:55.074309  536580 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:37:55.074379  536580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:37:55.218961  536580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:37:55.586788  536580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:37:56.294176  536580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:37:56.612701  536580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:37:57.006736  536580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:37:57.007794  536580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:37:57.012007  536580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:37:57.015347  536580 out.go:252]   - Booting up control plane ...
	I1213 11:37:57.015455  536580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:37:57.015551  536580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:37:57.016282  536580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:37:57.030979  536580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:37:57.031088  536580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:37:57.038566  536580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:37:57.038957  536580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:37:57.039005  536580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:37:57.170509  536580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:37:57.170718  536580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:41:57.170363  536580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000204465s
	I1213 11:41:57.170403  536580 kubeadm.go:319] 
	I1213 11:41:57.170463  536580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:41:57.170502  536580 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:41:57.170622  536580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:41:57.170632  536580 kubeadm.go:319] 
	I1213 11:41:57.170787  536580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:41:57.170874  536580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:41:57.170916  536580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:41:57.170923  536580 kubeadm.go:319] 
	I1213 11:41:57.174246  536580 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:41:57.174646  536580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:41:57.174753  536580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:41:57.175003  536580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 11:41:57.175014  536580 kubeadm.go:319] 
	I1213 11:41:57.175079  536580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
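
kubeadm gave up at the wait-control-plane phase because the kubelet never answered its health endpoint within 4 minutes; the warnings that follow repeat its own triage suggestions. A short sketch of that triage on the node, using only the commands and URL named in the output above (assumes a systemd host, as the message itself does):

	# kubeadm's suggested checks
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# the health probe kubeadm was polling until it timed out
	curl -sSL http://127.0.0.1:10248/healthz
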
	W1213 11:41:57.175190  536580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000204465s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000204465s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 11:41:57.175275  536580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:41:57.599726  536580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:41:57.612780  536580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:41:57.612851  536580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:41:57.620790  536580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:41:57.620812  536580 kubeadm.go:158] found existing configuration files:
	
	I1213 11:41:57.620862  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:41:57.628681  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:41:57.628749  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:41:57.636661  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:41:57.644798  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:41:57.644869  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:41:57.652289  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:41:57.660156  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:41:57.660221  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:41:57.668074  536580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:41:57.676136  536580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:41:57.676259  536580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:41:57.683770  536580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:41:57.722158  536580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:41:57.722551  536580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:41:57.790034  536580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:41:57.790110  536580 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:41:57.790150  536580 kubeadm.go:319] OS: Linux
	I1213 11:41:57.790197  536580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:41:57.790248  536580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:41:57.790297  536580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:41:57.790347  536580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:41:57.790396  536580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:41:57.790457  536580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:41:57.790504  536580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:41:57.790554  536580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:41:57.790601  536580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:41:57.855447  536580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:41:57.855626  536580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:41:57.855749  536580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:41:57.868113  536580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:41:57.873830  536580 out.go:252]   - Generating certificates and keys ...
	I1213 11:41:57.873985  536580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:41:57.874086  536580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:41:57.874208  536580 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:41:57.874309  536580 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:41:57.874427  536580 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:41:57.874522  536580 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:41:57.874634  536580 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:41:57.874729  536580 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:41:57.874846  536580 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:41:57.874953  536580 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:41:57.875023  536580 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:41:57.875118  536580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:41:58.335554  536580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:41:58.495138  536580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:41:58.840348  536580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:41:59.107872  536580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:41:59.167909  536580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:41:59.168009  536580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:41:59.169615  536580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:41:59.172894  536580 out.go:252]   - Booting up control plane ...
	I1213 11:41:59.172996  536580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:41:59.173075  536580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:41:59.173852  536580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:41:59.201607  536580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:41:59.201711  536580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:41:59.211398  536580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:41:59.212016  536580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:41:59.212063  536580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:41:59.400000  536580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:41:59.400120  536580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:45:59.400955  536580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00107176s
	I1213 11:45:59.401256  536580 kubeadm.go:319] 
	I1213 11:45:59.401349  536580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:59.401387  536580 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:59.401498  536580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:59.401505  536580 kubeadm.go:319] 
	I1213 11:45:59.401616  536580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:59.401646  536580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:59.401676  536580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:59.401680  536580 kubeadm.go:319] 
	I1213 11:45:59.406417  536580 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:59.406890  536580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:59.407014  536580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:59.407332  536580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 11:45:59.407339  536580 kubeadm.go:319] 
	I1213 11:45:59.407416  536580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:45:59.407478  536580 kubeadm.go:403] duration metric: took 12m6.919035582s to StartCluster
	I1213 11:45:59.407566  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:45:59.407642  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:45:59.452555  536580 cri.go:89] found id: ""
	I1213 11:45:59.452580  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.452588  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:45:59.452601  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:45:59.452667  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:45:59.488827  536580 cri.go:89] found id: ""
	I1213 11:45:59.488876  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.488885  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:45:59.488891  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:45:59.488959  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:45:59.525124  536580 cri.go:89] found id: ""
	I1213 11:45:59.525150  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.525162  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:45:59.525172  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:45:59.525256  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:45:59.576323  536580 cri.go:89] found id: ""
	I1213 11:45:59.576350  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.576359  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:45:59.576365  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:45:59.576426  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:45:59.606984  536580 cri.go:89] found id: ""
	I1213 11:45:59.607005  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.607013  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:45:59.607028  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:45:59.607098  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:45:59.639869  536580 cri.go:89] found id: ""
	I1213 11:45:59.639955  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.639978  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:45:59.639998  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:45:59.640126  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:45:59.678214  536580 cri.go:89] found id: ""
	I1213 11:45:59.678238  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.678247  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:45:59.678253  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:45:59.678314  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:45:59.708923  536580 cri.go:89] found id: ""
	I1213 11:45:59.708998  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.709022  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:45:59.709045  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:45:59.709088  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:45:59.750287  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:45:59.750313  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:45:59.835491  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:45:59.835584  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:45:59.855059  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:45:59.855086  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:45:59.967783  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:45:59.967857  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:45:59.967884  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 11:46:00.010919  536580 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00107176s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:46:00.011063  536580 out.go:285] * 
	W1213 11:46:00.011345  536580 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00107176s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:46:00.011400  536580 out.go:285] * 
	W1213 11:46:00.013643  536580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:46:00.058849  536580 out.go:203] 
	W1213 11:46:00.061096  536580 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00107176s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:46:00.061433  536580 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:46:00.061582  536580 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:46:00.084274  536580 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
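For reference, the kubeadm output above points at two concrete follow-ups: minikube's hint about the kubelet cgroup driver and kubeadm's own 'journalctl -xeu kubelet' / healthz probe. A minimal, unverified sketch of those checks against this profile (command and flags copied from the failing invocation and the suggestion text; not run as part of this report) could look like:

	# Retry the upgrade with the systemd cgroup driver, per the suggestion printed above
	out/minikube-linux-arm64 start -p kubernetes-upgrade-854588 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Inspect the kubelet from inside the node, using the same endpoint kubeadm polls
	out/minikube-linux-arm64 -p kubernetes-upgrade-854588 ssh -- "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-linux-arm64 -p kubernetes-upgrade-854588 ssh -- "curl -sSL http://127.0.0.1:10248/healthz"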
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-854588 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-854588 version --output=json: exit status 1 (169.295974ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-13 11:46:01.23291933 +0000 UTC m=+4862.077828066
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-854588
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-854588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "985c7924aa54e6c36562015cd920704febe11b9e6f0c93377a6a243e921c5d2b",
	        "Created": "2025-12-13T11:33:01.242999189Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 536703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:33:40.642580982Z",
	            "FinishedAt": "2025-12-13T11:33:39.463812412Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/985c7924aa54e6c36562015cd920704febe11b9e6f0c93377a6a243e921c5d2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/985c7924aa54e6c36562015cd920704febe11b9e6f0c93377a6a243e921c5d2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/985c7924aa54e6c36562015cd920704febe11b9e6f0c93377a6a243e921c5d2b/hosts",
	        "LogPath": "/var/lib/docker/containers/985c7924aa54e6c36562015cd920704febe11b9e6f0c93377a6a243e921c5d2b/985c7924aa54e6c36562015cd920704febe11b9e6f0c93377a6a243e921c5d2b-json.log",
	        "Name": "/kubernetes-upgrade-854588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-854588:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-854588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "985c7924aa54e6c36562015cd920704febe11b9e6f0c93377a6a243e921c5d2b",
	                "LowerDir": "/var/lib/docker/overlay2/3710566c60a581bb21d717c690528b350462827ac4401f027028a1d7222c561e-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3710566c60a581bb21d717c690528b350462827ac4401f027028a1d7222c561e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3710566c60a581bb21d717c690528b350462827ac4401f027028a1d7222c561e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3710566c60a581bb21d717c690528b350462827ac4401f027028a1d7222c561e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-854588",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-854588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-854588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-854588",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-854588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d46bb0f8e4f365b383466b5196c088aed41a70226cf8e25097ee9d58b9d3ece",
	            "SandboxKey": "/var/run/docker/netns/6d46bb0f8e4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-854588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:e7:27:a1:37:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6da232094523e1187f9e51afab3010fdbcdc51cb6ca87b95bd59d1fe7203b151",
	                    "EndpointID": "0cca15071b62a23a6e5dbd1b3cc272a20fb01ccdb8f16ad267b773a03771d8c2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-854588",
	                        "985c7924aa54"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
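The port bindings in the inspect output above are how the harness reaches the node over localhost. As a side note, a single mapping can be pulled out with the same Go-template query that minikube itself runs later in these logs; a minimal sketch, assuming the kubernetes-upgrade-854588 container from this run still exists:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      kubernetes-upgrade-854588
    # with the mappings captured above, this prints 33388 (the SSH port)

The 8443/tcp mapping (33391 in this run) is resolved the same way when the apiserver is probed.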
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-854588 -n kubernetes-upgrade-854588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-854588 -n kubernetes-upgrade-854588: exit status 2 (380.569597ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
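To re-run the probe above by hand (a sketch, assuming the profile from this run still exists), the same status command can be checked together with its exit code:

    out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-854588 -n kubernetes-upgrade-854588
    echo "status exit code: $?"    # the harness saw exit code 2 here, which it notes may be ok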
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-854588 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-062409 sudo systemctl status kubelet --all --full --no-pager                                     │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat kubelet --no-pager                                                     │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status docker --all --full --no-pager                                      │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat docker --no-pager                                                      │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/docker/daemon.json                                                          │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo docker system info                                                                   │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cri-dockerd --version                                                                │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat containerd --no-pager                                                  │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/containerd/config.toml                                                      │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo containerd config dump                                                               │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status crio --all --full --no-pager                                        │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat crio --no-pager                                                        │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo crio config                                                                          │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ delete  │ -p cilium-062409                                                                                           │ cilium-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:45 UTC │
	│ start   │ -p force-systemd-env-181508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-181508 │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:45:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:45:43.079925  574590 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:45:43.080092  574590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:45:43.080123  574590 out.go:374] Setting ErrFile to fd 2...
	I1213 11:45:43.080145  574590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:45:43.080435  574590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:45:43.080878  574590 out.go:368] Setting JSON to false
	I1213 11:45:43.081799  574590 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12495,"bootTime":1765613848,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:45:43.081901  574590 start.go:143] virtualization:  
	I1213 11:45:43.085592  574590 out.go:179] * [force-systemd-env-181508] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:45:43.088701  574590 notify.go:221] Checking for updates...
	I1213 11:45:43.089564  574590 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:45:43.092689  574590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:45:43.095786  574590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:45:43.098752  574590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:45:43.101755  574590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:45:43.104706  574590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1213 11:45:43.108142  574590 config.go:182] Loaded profile config "kubernetes-upgrade-854588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:45:43.108289  574590 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:45:43.130183  574590 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:45:43.130306  574590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:45:43.190608  574590 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:45:43.180844968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:45:43.190725  574590 docker.go:319] overlay module found
	I1213 11:45:43.193934  574590 out.go:179] * Using the docker driver based on user configuration
	I1213 11:45:43.196756  574590 start.go:309] selected driver: docker
	I1213 11:45:43.196782  574590 start.go:927] validating driver "docker" against <nil>
	I1213 11:45:43.196810  574590 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:45:43.197560  574590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:45:43.257974  574590 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:45:43.248474727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:45:43.258129  574590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:45:43.258341  574590 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 11:45:43.261371  574590 out.go:179] * Using Docker driver with root privileges
	I1213 11:45:43.264121  574590 cni.go:84] Creating CNI manager for ""
	I1213 11:45:43.264190  574590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:45:43.264203  574590 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:45:43.264292  574590 start.go:353] cluster config:
	{Name:force-systemd-env-181508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-181508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:45:43.267383  574590 out.go:179] * Starting "force-systemd-env-181508" primary control-plane node in "force-systemd-env-181508" cluster
	I1213 11:45:43.270440  574590 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:45:43.273354  574590 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:45:43.276191  574590 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:45:43.276245  574590 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:45:43.276257  574590 cache.go:65] Caching tarball of preloaded images
	I1213 11:45:43.276345  574590 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:45:43.276355  574590 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:45:43.276463  574590 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/config.json ...
	I1213 11:45:43.276482  574590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/config.json: {Name:mkbe6881f4030413c20a1546e86e1e83343cd19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:43.276635  574590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:45:43.298646  574590 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:45:43.298666  574590 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:45:43.298680  574590 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:45:43.298710  574590 start.go:360] acquireMachinesLock for force-systemd-env-181508: {Name:mk7f60957ee775921927a5f93589f3701e81ef51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:45:43.298804  574590 start.go:364] duration metric: took 78.966µs to acquireMachinesLock for "force-systemd-env-181508"
	I1213 11:45:43.298829  574590 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-181508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-181508 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:45:43.298894  574590 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:45:43.302263  574590 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:45:43.302494  574590 start.go:159] libmachine.API.Create for "force-systemd-env-181508" (driver="docker")
	I1213 11:45:43.302525  574590 client.go:173] LocalClient.Create starting
	I1213 11:45:43.302602  574590 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:45:43.302637  574590 main.go:143] libmachine: Decoding PEM data...
	I1213 11:45:43.302658  574590 main.go:143] libmachine: Parsing certificate...
	I1213 11:45:43.302706  574590 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:45:43.302726  574590 main.go:143] libmachine: Decoding PEM data...
	I1213 11:45:43.302738  574590 main.go:143] libmachine: Parsing certificate...
	I1213 11:45:43.303077  574590 cli_runner.go:164] Run: docker network inspect force-systemd-env-181508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:45:43.318743  574590 cli_runner.go:211] docker network inspect force-systemd-env-181508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:45:43.318816  574590 network_create.go:284] running [docker network inspect force-systemd-env-181508] to gather additional debugging logs...
	I1213 11:45:43.318833  574590 cli_runner.go:164] Run: docker network inspect force-systemd-env-181508
	W1213 11:45:43.339045  574590 cli_runner.go:211] docker network inspect force-systemd-env-181508 returned with exit code 1
	I1213 11:45:43.339080  574590 network_create.go:287] error running [docker network inspect force-systemd-env-181508]: docker network inspect force-systemd-env-181508: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-181508 not found
	I1213 11:45:43.339095  574590 network_create.go:289] output of [docker network inspect force-systemd-env-181508]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-181508 not found
	
	** /stderr **
	I1213 11:45:43.339193  574590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:45:43.360877  574590 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:45:43.361372  574590 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:45:43.361695  574590 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:45:43.362027  574590 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6da232094523 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:9a:a2:2c:58:26} reservation:<nil>}
	I1213 11:45:43.362575  574590 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cbb50}
	I1213 11:45:43.362626  574590 network_create.go:124] attempt to create docker network force-systemd-env-181508 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 11:45:43.362688  574590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-181508 force-systemd-env-181508
	I1213 11:45:43.425357  574590 network_create.go:108] docker network force-systemd-env-181508 192.168.85.0/24 created
	I1213 11:45:43.425393  574590 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-181508" container
	I1213 11:45:43.425486  574590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:45:43.443334  574590 cli_runner.go:164] Run: docker volume create force-systemd-env-181508 --label name.minikube.sigs.k8s.io=force-systemd-env-181508 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:45:43.462000  574590 oci.go:103] Successfully created a docker volume force-systemd-env-181508
	I1213 11:45:43.462102  574590 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-181508-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-181508 --entrypoint /usr/bin/test -v force-systemd-env-181508:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:45:44.001438  574590 oci.go:107] Successfully prepared a docker volume force-systemd-env-181508
	I1213 11:45:44.001508  574590 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:45:44.001519  574590 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:45:44.001615  574590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-181508:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:45:48.232159  574590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-181508:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.230486908s)
	I1213 11:45:48.232191  574590 kic.go:203] duration metric: took 4.230668332s to extract preloaded images to volume ...
	W1213 11:45:48.232355  574590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:45:48.232476  574590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:45:48.297515  574590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-181508 --name force-systemd-env-181508 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-181508 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-181508 --network force-systemd-env-181508 --ip 192.168.85.2 --volume force-systemd-env-181508:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:45:48.615486  574590 cli_runner.go:164] Run: docker container inspect force-systemd-env-181508 --format={{.State.Running}}
	I1213 11:45:48.639100  574590 cli_runner.go:164] Run: docker container inspect force-systemd-env-181508 --format={{.State.Status}}
	I1213 11:45:48.664203  574590 cli_runner.go:164] Run: docker exec force-systemd-env-181508 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:45:48.717483  574590 oci.go:144] the created container "force-systemd-env-181508" has a running status.
	I1213 11:45:48.717520  574590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa...
	I1213 11:45:48.910998  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1213 11:45:48.911045  574590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:45:48.951419  574590 cli_runner.go:164] Run: docker container inspect force-systemd-env-181508 --format={{.State.Status}}
	I1213 11:45:48.984152  574590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:45:48.984179  574590 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-181508 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:45:49.038722  574590 cli_runner.go:164] Run: docker container inspect force-systemd-env-181508 --format={{.State.Status}}
	I1213 11:45:49.066820  574590 machine.go:94] provisionDockerMachine start ...
	I1213 11:45:49.066919  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:49.094962  574590 main.go:143] libmachine: Using SSH client type: native
	I1213 11:45:49.095324  574590 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1213 11:45:49.095341  574590 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:45:49.096179  574590 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:45:52.251575  574590 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-181508
	
	I1213 11:45:52.251601  574590 ubuntu.go:182] provisioning hostname "force-systemd-env-181508"
	I1213 11:45:52.251662  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:52.270441  574590 main.go:143] libmachine: Using SSH client type: native
	I1213 11:45:52.270763  574590 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1213 11:45:52.270774  574590 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-181508 && echo "force-systemd-env-181508" | sudo tee /etc/hostname
	I1213 11:45:52.437617  574590 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-env-181508
	
	I1213 11:45:52.437698  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:52.455988  574590 main.go:143] libmachine: Using SSH client type: native
	I1213 11:45:52.456301  574590 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1213 11:45:52.456323  574590 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-181508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-181508/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-181508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:45:52.608264  574590 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:45:52.608357  574590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:45:52.608401  574590 ubuntu.go:190] setting up certificates
	I1213 11:45:52.608436  574590 provision.go:84] configureAuth start
	I1213 11:45:52.608521  574590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-181508
	I1213 11:45:52.627122  574590 provision.go:143] copyHostCerts
	I1213 11:45:52.627162  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:45:52.627199  574590 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:45:52.627214  574590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:45:52.627394  574590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:45:52.627542  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:45:52.627563  574590 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:45:52.627568  574590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:45:52.627603  574590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:45:52.627657  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:45:52.627673  574590 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:45:52.627683  574590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:45:52.627708  574590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:45:52.627764  574590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-181508 san=[127.0.0.1 192.168.85.2 force-systemd-env-181508 localhost minikube]
	I1213 11:45:52.745389  574590 provision.go:177] copyRemoteCerts
	I1213 11:45:52.745460  574590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:45:52.745508  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:52.765084  574590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa Username:docker}
	I1213 11:45:52.871564  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1213 11:45:52.871624  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:45:52.889436  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1213 11:45:52.889502  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1213 11:45:52.907407  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1213 11:45:52.907467  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:45:52.925108  574590 provision.go:87] duration metric: took 316.643727ms to configureAuth
	I1213 11:45:52.925136  574590 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:45:52.925323  574590 config.go:182] Loaded profile config "force-systemd-env-181508": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:45:52.925439  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:52.943417  574590 main.go:143] libmachine: Using SSH client type: native
	I1213 11:45:52.943750  574590 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1213 11:45:52.943774  574590 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:45:53.252175  574590 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:45:53.252201  574590 machine.go:97] duration metric: took 4.18535699s to provisionDockerMachine
	I1213 11:45:53.252212  574590 client.go:176] duration metric: took 9.949682253s to LocalClient.Create
	I1213 11:45:53.252227  574590 start.go:167] duration metric: took 9.949735636s to libmachine.API.Create "force-systemd-env-181508"
	I1213 11:45:53.252235  574590 start.go:293] postStartSetup for "force-systemd-env-181508" (driver="docker")
	I1213 11:45:53.252246  574590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:45:53.252342  574590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:45:53.252386  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:53.273311  574590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa Username:docker}
	I1213 11:45:53.379704  574590 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:45:53.383034  574590 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:45:53.383067  574590 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:45:53.383079  574590 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:45:53.383131  574590 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:45:53.383225  574590 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:45:53.383237  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /etc/ssl/certs/3563282.pem
	I1213 11:45:53.383343  574590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:45:53.392098  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:45:53.411559  574590 start.go:296] duration metric: took 159.307876ms for postStartSetup
	I1213 11:45:53.411967  574590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-181508
	I1213 11:45:53.428558  574590 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/config.json ...
	I1213 11:45:53.428828  574590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:45:53.428878  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:53.446666  574590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa Username:docker}
	I1213 11:45:53.548475  574590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:45:53.552970  574590 start.go:128] duration metric: took 10.254062589s to createHost
	I1213 11:45:53.553033  574590 start.go:83] releasing machines lock for "force-systemd-env-181508", held for 10.254219652s
	I1213 11:45:53.553136  574590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-181508
	I1213 11:45:53.570461  574590 ssh_runner.go:195] Run: cat /version.json
	I1213 11:45:53.570510  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:53.570754  574590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:45:53.570826  574590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-181508
	I1213 11:45:53.587108  574590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa Username:docker}
	I1213 11:45:53.604543  574590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/force-systemd-env-181508/id_rsa Username:docker}
	I1213 11:45:53.691149  574590 ssh_runner.go:195] Run: systemctl --version
	I1213 11:45:53.784650  574590 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:45:53.836444  574590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:45:53.840767  574590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:45:53.840836  574590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:45:53.871320  574590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:45:53.871345  574590 start.go:496] detecting cgroup driver to use...
	I1213 11:45:53.871363  574590 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1213 11:45:53.871428  574590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:45:53.889645  574590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:45:53.902592  574590 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:45:53.902672  574590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:45:53.920434  574590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:45:53.939125  574590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:45:54.063736  574590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:45:54.191883  574590 docker.go:234] disabling docker service ...
	I1213 11:45:54.191951  574590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:45:54.213481  574590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:45:54.226643  574590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:45:54.345889  574590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:45:54.467295  574590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:45:54.481025  574590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:45:54.495299  574590 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:45:54.495414  574590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:45:54.504295  574590 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 11:45:54.504412  574590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:45:54.513939  574590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:45:54.523285  574590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:45:54.537207  574590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:45:54.548997  574590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:45:54.560558  574590 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:45:54.577637  574590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:45:54.587193  574590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:45:54.597665  574590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:45:54.604877  574590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:45:54.716094  574590 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:45:54.871553  574590 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:45:54.871656  574590 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:45:54.875635  574590 start.go:564] Will wait 60s for crictl version
	I1213 11:45:54.875795  574590 ssh_runner.go:195] Run: which crictl
	I1213 11:45:54.879676  574590 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:45:54.909548  574590 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:45:54.909684  574590 ssh_runner.go:195] Run: crio --version
	I1213 11:45:54.936838  574590 ssh_runner.go:195] Run: crio --version
	I1213 11:45:54.971788  574590 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:45:54.974605  574590 cli_runner.go:164] Run: docker network inspect force-systemd-env-181508 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:45:54.990681  574590 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:45:54.994359  574590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
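
The bash one-liner above is the idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current gateway mapping, and copy the temp file back over /etc/hosts. The same steps spread out for readability (a sketch equivalent to the command above):

  # drop any stale entry for host.minikube.internal, keep everything else
  grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
  # append the current mapping for the docker network gateway
  echo $'192.168.85.1\thost.minikube.internal' >> /tmp/h.$$
  # copy (not move) the result back, so a bind-mounted /etc/hosts is updated in place
  sudo cp /tmp/h.$$ /etc/hosts
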
	I1213 11:45:55.005933  574590 kubeadm.go:884] updating cluster {Name:force-systemd-env-181508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-181508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:45:55.007269  574590 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:45:55.007417  574590 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:45:55.044999  574590 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:45:55.045033  574590 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:45:55.045121  574590 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:45:55.074631  574590 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:45:55.074656  574590 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:45:55.074665  574590 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 11:45:55.074749  574590 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-181508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-181508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
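
The [Unit]/[Service]/[Install] fragment above is what gets written to the kubelet drop-in a few lines below (the 374-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). If the override needs checking by hand after the daemon-reload, something like the following would do (a sketch, not part of the test flow):

  # show the effective kubelet unit, including all drop-ins systemd merged in
  sudo systemctl cat kubelet
  # confirm the active ExecStart is the one with --hostname-override and --node-ip
  sudo systemctl show kubelet -p ExecStart --no-pager
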
	I1213 11:45:55.074836  574590 ssh_runner.go:195] Run: crio config
	I1213 11:45:55.128329  574590 cni.go:84] Creating CNI manager for ""
	I1213 11:45:55.128350  574590 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:45:55.128368  574590 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:45:55.128391  574590 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-181508 NodeName:force-systemd-env-181508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:45:55.128521  574590 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-181508"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
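
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below. For an offline sanity check of such a file, newer kubeadm releases ship a validate subcommand; a sketch, reusing the binary path seen elsewhere in this run:

  # validate the generated config against kubeadm's API schema (kubeadm >= 1.26)
  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
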
	
	I1213 11:45:55.128600  574590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:45:55.136500  574590 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:45:55.136623  574590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:45:55.144200  574590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:45:55.157079  574590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:45:55.171863  574590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1213 11:45:55.185422  574590 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:45:55.188866  574590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:45:55.198390  574590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:45:55.314005  574590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:45:55.335631  574590 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508 for IP: 192.168.85.2
	I1213 11:45:55.335655  574590 certs.go:195] generating shared ca certs ...
	I1213 11:45:55.335674  574590 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:55.335809  574590 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:45:55.335860  574590 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:45:55.335873  574590 certs.go:257] generating profile certs ...
	I1213 11:45:55.335927  574590 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/client.key
	I1213 11:45:55.335945  574590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/client.crt with IP's: []
	I1213 11:45:55.458058  574590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/client.crt ...
	I1213 11:45:55.458090  574590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/client.crt: {Name:mk33b574f22bdf8cdc0e1712e2df43a27447c16a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:55.458287  574590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/client.key ...
	I1213 11:45:55.458302  574590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/client.key: {Name:mk67ce216bb31083a7a02ba4095c991cedc72eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:55.458396  574590 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.key.acb698fd
	I1213 11:45:55.458414  574590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.crt.acb698fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:45:55.551809  574590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.crt.acb698fd ...
	I1213 11:45:55.551847  574590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.crt.acb698fd: {Name:mk9b89a3fc07e72d8d655a46f6444e45ae7253e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:55.552029  574590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.key.acb698fd ...
	I1213 11:45:55.552045  574590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.key.acb698fd: {Name:mkf9818c0d7edbb944de25b0cc99656523d0d7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:55.552129  574590 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.crt.acb698fd -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.crt
	I1213 11:45:55.552214  574590 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.key.acb698fd -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.key
	I1213 11:45:55.552276  574590 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.key
	I1213 11:45:55.552294  574590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.crt with IP's: []
	I1213 11:45:55.633192  574590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.crt ...
	I1213 11:45:55.633224  574590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.crt: {Name:mk2ab7211868f19cac92e7a94443ac44cf2e67b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:55.633394  574590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.key ...
	I1213 11:45:55.633409  574590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.key: {Name:mk6f6423006f59925d22c13fa3cd24f3157ee0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:45:55.633484  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1213 11:45:55.633506  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1213 11:45:55.633520  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1213 11:45:55.633535  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1213 11:45:55.633547  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1213 11:45:55.633563  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1213 11:45:55.633607  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1213 11:45:55.633626  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1213 11:45:55.633679  574590 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:45:55.633726  574590 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:45:55.633739  574590 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:45:55.633767  574590 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:45:55.633797  574590 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:45:55.633826  574590 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:45:55.633876  574590 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:45:55.633911  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem -> /usr/share/ca-certificates/356328.pem
	I1213 11:45:55.633928  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> /usr/share/ca-certificates/3563282.pem
	I1213 11:45:55.633951  574590 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:45:55.634463  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:45:55.652936  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:45:55.670799  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:45:55.689028  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:45:55.709229  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1213 11:45:55.728002  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:45:55.746224  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:45:55.764972  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/force-systemd-env-181508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:45:55.781844  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:45:55.799076  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:45:55.816474  574590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:45:55.833662  574590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:45:55.845765  574590 ssh_runner.go:195] Run: openssl version
	I1213 11:45:55.852208  574590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:45:55.859455  574590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:45:55.866915  574590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:45:55.870807  574590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:45:55.870871  574590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:45:55.911673  574590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:45:55.918873  574590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:45:55.926107  574590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:45:55.933317  574590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:45:55.940419  574590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:45:55.944163  574590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:45:55.944223  574590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:45:55.984697  574590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:45:55.991825  574590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:45:55.998751  574590 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:45:56.007812  574590 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:45:56.015940  574590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:45:56.020094  574590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:45:56.020186  574590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:45:56.062610  574590 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:45:56.070199  574590 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
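
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA placed under /usr/share/ca-certificates gets a /etc/ssl/certs symlink named after its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The same pattern as a small loop, using the file names from the log (a sketch, not what the test executes):

  for pem in minikubeCA.pem 356328.pem 3563282.pem; do
      src="/usr/share/ca-certificates/$pem"
      # expose the CA under /etc/ssl/certs, as the ln -fs calls above do
      sudo ln -fs "$src" "/etc/ssl/certs/$pem"
      # OpenSSL resolves CAs via <subject-hash>.0 links in the certs directory
      h=$(openssl x509 -hash -noout -in "$src")
      sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/$h.0"
  done
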
	I1213 11:45:56.078302  574590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:45:56.082565  574590 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:45:56.082637  574590 kubeadm.go:401] StartCluster: {Name:force-systemd-env-181508 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-env-181508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:45:56.082746  574590 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:45:56.082827  574590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:45:56.110169  574590 cri.go:89] found id: ""
	I1213 11:45:56.110240  574590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:45:56.118257  574590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:45:56.126232  574590 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:45:56.126328  574590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:45:56.134325  574590 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:45:56.134346  574590 kubeadm.go:158] found existing configuration files:
	
	I1213 11:45:56.134403  574590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:45:56.141968  574590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:45:56.142082  574590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:45:56.149149  574590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:45:56.156711  574590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:45:56.156775  574590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:45:56.163806  574590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:45:56.171289  574590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:45:56.171350  574590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:45:56.178370  574590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:45:56.185840  574590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:45:56.185919  574590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:45:56.193190  574590 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:45:56.231144  574590 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 11:45:56.231238  574590 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:45:56.267882  574590 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:45:56.267959  574590 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:45:56.268004  574590 kubeadm.go:319] OS: Linux
	I1213 11:45:56.268053  574590 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:45:56.268106  574590 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:45:56.268157  574590 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:45:56.268209  574590 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:45:56.268260  574590 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:45:56.268320  574590 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:45:56.268369  574590 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:45:56.268423  574590 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:45:56.268472  574590 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:45:56.343347  574590 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:45:56.343569  574590 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:45:56.343716  574590 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:45:56.351959  574590 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:45:56.357197  574590 out.go:252]   - Generating certificates and keys ...
	I1213 11:45:56.357302  574590 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:45:56.357381  574590 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:45:56.511383  574590 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:45:56.823874  574590 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:45:57.939241  574590 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:45:59.400955  536580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00107176s
	I1213 11:45:59.401256  536580 kubeadm.go:319] 
	I1213 11:45:59.401349  536580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:45:59.401387  536580 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:45:59.401498  536580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:45:59.401505  536580 kubeadm.go:319] 
	I1213 11:45:59.401616  536580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:45:59.401646  536580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:45:59.401676  536580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:45:59.401680  536580 kubeadm.go:319] 
	I1213 11:45:59.406417  536580 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:45:59.406890  536580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:45:59.407014  536580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:45:59.407332  536580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 11:45:59.407339  536580 kubeadm.go:319] 
	I1213 11:45:59.407416  536580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:45:59.407478  536580 kubeadm.go:403] duration metric: took 12m6.919035582s to StartCluster
	I1213 11:45:59.407566  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 11:45:59.407642  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 11:45:59.452555  536580 cri.go:89] found id: ""
	I1213 11:45:59.452580  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.452588  536580 logs.go:284] No container was found matching "kube-apiserver"
	I1213 11:45:59.452601  536580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 11:45:59.452667  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 11:45:59.488827  536580 cri.go:89] found id: ""
	I1213 11:45:59.488876  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.488885  536580 logs.go:284] No container was found matching "etcd"
	I1213 11:45:59.488891  536580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 11:45:59.488959  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 11:45:59.525124  536580 cri.go:89] found id: ""
	I1213 11:45:59.525150  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.525162  536580 logs.go:284] No container was found matching "coredns"
	I1213 11:45:59.525172  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 11:45:59.525256  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 11:45:59.576323  536580 cri.go:89] found id: ""
	I1213 11:45:59.576350  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.576359  536580 logs.go:284] No container was found matching "kube-scheduler"
	I1213 11:45:59.576365  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 11:45:59.576426  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 11:45:59.606984  536580 cri.go:89] found id: ""
	I1213 11:45:59.607005  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.607013  536580 logs.go:284] No container was found matching "kube-proxy"
	I1213 11:45:59.607028  536580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 11:45:59.607098  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 11:45:59.639869  536580 cri.go:89] found id: ""
	I1213 11:45:59.639955  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.639978  536580 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 11:45:59.639998  536580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 11:45:59.640126  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 11:45:59.678214  536580 cri.go:89] found id: ""
	I1213 11:45:59.678238  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.678247  536580 logs.go:284] No container was found matching "kindnet"
	I1213 11:45:59.678253  536580 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 11:45:59.678314  536580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 11:45:59.708923  536580 cri.go:89] found id: ""
	I1213 11:45:59.708998  536580 logs.go:282] 0 containers: []
	W1213 11:45:59.709022  536580 logs.go:284] No container was found matching "storage-provisioner"
	I1213 11:45:59.709045  536580 logs.go:123] Gathering logs for container status ...
	I1213 11:45:59.709088  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 11:45:59.750287  536580 logs.go:123] Gathering logs for kubelet ...
	I1213 11:45:59.750313  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 11:45:59.835491  536580 logs.go:123] Gathering logs for dmesg ...
	I1213 11:45:59.835584  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 11:45:59.855059  536580 logs.go:123] Gathering logs for describe nodes ...
	I1213 11:45:59.855086  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 11:45:59.967783  536580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 11:45:59.967857  536580 logs.go:123] Gathering logs for CRI-O ...
	I1213 11:45:59.967884  536580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 11:46:00.010919  536580 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00107176s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 11:46:00.011063  536580 out.go:285] * 
	W1213 11:46:00.011345  536580 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00107176s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:46:00.011400  536580 out.go:285] * 
	W1213 11:46:00.013643  536580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:46:00.058849  536580 out.go:203] 
	W1213 11:46:00.061096  536580 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00107176s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 11:46:00.061433  536580 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 11:46:00.061582  536580 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 11:46:00.084274  536580 out.go:203] 
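
When this K8S_KUBELET_NOT_RUNNING exit is reproduced by hand, the commands kubeadm itself suggests above, plus the healthz endpoint it was polling, are the quickest way to see why the kubelet never became healthy (a suggestion, not something this test run executed):

  # unit state and the most recent start failures
  sudo systemctl status kubelet --no-pager
  # full kubelet log for the failed start attempts
  sudo journalctl -xeu kubelet --no-pager | tail -n 200
  # the endpoint kubeadm polled for up to 4m0s
  curl -sSL http://127.0.0.1:10248/healthz
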
	
	
	==> CRI-O <==
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.040941855Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.04097932Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041027682Z" level=info msg="Create NRI interface"
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041132864Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041142071Z" level=info msg="runtime interface created"
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041152902Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041159055Z" level=info msg="runtime interface starting up..."
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041165488Z" level=info msg="starting plugins..."
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041178108Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:33:47 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:33:47.041237095Z" level=info msg="No systemd watchdog enabled"
	Dec 13 11:33:47 kubernetes-upgrade-854588 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 11:37:55 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:37:55.063776768Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=1512e635-3faf-4ba0-bf64-195cee71b01e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:37:55 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:37:55.065153397Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=cd773682-664b-406d-9c2e-9ec567c089fb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:37:55 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:37:55.065809043Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=69386938-483e-43ca-81fa-a8437d080f6c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:37:55 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:37:55.066339394Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ea9330d5-b907-4c0a-95f6-c99dc4c6f5a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:37:55 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:37:55.066879732Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=df6cb990-742a-44c1-b5c4-5e14c15af908 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:37:55 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:37:55.067341382Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=b778ab87-3d8d-4bd9-b07e-dd312a61cab5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:37:55 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:37:55.067984957Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=a96056f6-7683-4648-8803-f76e851a9a8a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:41:57 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:41:57.858872634Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=67e6f340-ac04-461f-8617-9af45bddefc8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:41:57 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:41:57.859822149Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=a5f68f7a-6113-45e0-a90e-53621d4c1a1a name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:41:57 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:41:57.860326408Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e70eb9af-400f-4c6a-a456-335b85d0034d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:41:57 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:41:57.860774774Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=11054199-60aa-4f5c-b056-a5f3e772d17f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:41:57 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:41:57.861189424Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=995763c9-8190-4384-8cf9-65a56eae9105 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:41:57 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:41:57.861590273Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=e12472ed-22a6-4e5e-82cc-2cb85468ef69 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:41:57 kubernetes-upgrade-854588 crio[617]: time="2025-12-13T11:41:57.862046803Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1c8788cd-8888-4e79-a3b3-f60921818cc0 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +34.539888] overlayfs: idmapped layers are currently not supported
	[Dec13 11:12] overlayfs: idmapped layers are currently not supported
	[Dec13 11:13] overlayfs: idmapped layers are currently not supported
	[  +3.803792] overlayfs: idmapped layers are currently not supported
	[Dec13 11:14] overlayfs: idmapped layers are currently not supported
	[ +27.964028] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:46:02 up  3:28,  0 user,  load average: 2.92, 2.05, 1.94
	Linux kubernetes-upgrade-854588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 11:45:59 kubernetes-upgrade-854588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:45:59 kubernetes-upgrade-854588 kubelet[12198]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:45:59 kubernetes-upgrade-854588 kubelet[12198]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:45:59 kubernetes-upgrade-854588 kubelet[12198]: E1213 11:45:59.938390   12198 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:45:59 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:45:59 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:46:00 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 13 11:46:00 kubernetes-upgrade-854588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:00 kubernetes-upgrade-854588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:00 kubernetes-upgrade-854588 kubelet[12215]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:46:00 kubernetes-upgrade-854588 kubelet[12215]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:46:00 kubernetes-upgrade-854588 kubelet[12215]: E1213 11:46:00.945060   12215 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:46:00 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:46:00 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:46:01 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 642.
	Dec 13 11:46:01 kubernetes-upgrade-854588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:01 kubernetes-upgrade-854588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:01 kubernetes-upgrade-854588 kubelet[12236]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:46:01 kubernetes-upgrade-854588 kubelet[12236]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 11:46:01 kubernetes-upgrade-854588 kubelet[12236]: E1213 11:46:01.856217   12236 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 11:46:01 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 11:46:01 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 11:46:02 kubernetes-upgrade-854588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 643.
	Dec 13 11:46:02 kubernetes-upgrade-854588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 11:46:02 kubernetes-upgrade-854588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-854588 -n kubernetes-upgrade-854588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-854588 -n kubernetes-upgrade-854588: exit status 2 (420.62261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-854588" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-854588" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-854588
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-854588: (2.459281715s)
--- FAIL: TestKubernetesUpgrade (791.51s)

                                                
                                    
TestPause/serial/Pause (6.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-649359 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-649359 --alsologtostderr -v=5: exit status 80 (2.148470961s)

                                                
                                                
-- stdout --
	* Pausing node pause-649359 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:44:44.971752  569296 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:44:44.972512  569296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:44:44.972526  569296 out.go:374] Setting ErrFile to fd 2...
	I1213 11:44:44.972532  569296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:44:44.973118  569296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:44:44.973875  569296 out.go:368] Setting JSON to false
	I1213 11:44:44.973964  569296 mustload.go:66] Loading cluster: pause-649359
	I1213 11:44:44.974734  569296 config.go:182] Loaded profile config "pause-649359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:44:44.975494  569296 cli_runner.go:164] Run: docker container inspect pause-649359 --format={{.State.Status}}
	I1213 11:44:44.993182  569296 host.go:66] Checking if "pause-649359" exists ...
	I1213 11:44:44.993510  569296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:44:45.109132  569296 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:44:45.088956239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:44:45.110120  569296 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-649359 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 11:44:45.120082  569296 out.go:179] * Pausing node pause-649359 ... 
	I1213 11:44:45.123191  569296 host.go:66] Checking if "pause-649359" exists ...
	I1213 11:44:45.123752  569296 ssh_runner.go:195] Run: systemctl --version
	I1213 11:44:45.123816  569296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:45.155965  569296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:45.282689  569296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:44:45.298508  569296 pause.go:52] kubelet running: true
	I1213 11:44:45.298648  569296 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:44:45.523462  569296 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:44:45.523593  569296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:44:45.604835  569296 cri.go:89] found id: "a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e"
	I1213 11:44:45.604865  569296 cri.go:89] found id: "1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe"
	I1213 11:44:45.604870  569296 cri.go:89] found id: "77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c"
	I1213 11:44:45.604873  569296 cri.go:89] found id: "44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373"
	I1213 11:44:45.604877  569296 cri.go:89] found id: "69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7"
	I1213 11:44:45.604880  569296 cri.go:89] found id: "84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8"
	I1213 11:44:45.604883  569296 cri.go:89] found id: "2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163"
	I1213 11:44:45.604889  569296 cri.go:89] found id: "e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	I1213 11:44:45.604897  569296 cri.go:89] found id: "2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	I1213 11:44:45.604905  569296 cri.go:89] found id: "b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1"
	I1213 11:44:45.604908  569296 cri.go:89] found id: "9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629"
	I1213 11:44:45.604911  569296 cri.go:89] found id: "6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42"
	I1213 11:44:45.604914  569296 cri.go:89] found id: "efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e"
	I1213 11:44:45.604918  569296 cri.go:89] found id: "e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0"
	I1213 11:44:45.604921  569296 cri.go:89] found id: ""
	I1213 11:44:45.604974  569296 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:44:45.615973  569296 retry.go:31] will retry after 132.860747ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:44:45Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:44:45.749402  569296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:44:45.764594  569296 pause.go:52] kubelet running: false
	I1213 11:44:45.764695  569296 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:44:45.916163  569296 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:44:45.916260  569296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:44:45.985189  569296 cri.go:89] found id: "a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e"
	I1213 11:44:45.985224  569296 cri.go:89] found id: "1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe"
	I1213 11:44:45.985229  569296 cri.go:89] found id: "77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c"
	I1213 11:44:45.985233  569296 cri.go:89] found id: "44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373"
	I1213 11:44:45.985237  569296 cri.go:89] found id: "69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7"
	I1213 11:44:45.985240  569296 cri.go:89] found id: "84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8"
	I1213 11:44:45.985243  569296 cri.go:89] found id: "2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163"
	I1213 11:44:45.985246  569296 cri.go:89] found id: "e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	I1213 11:44:45.985250  569296 cri.go:89] found id: "2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	I1213 11:44:45.985256  569296 cri.go:89] found id: "b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1"
	I1213 11:44:45.985260  569296 cri.go:89] found id: "9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629"
	I1213 11:44:45.985262  569296 cri.go:89] found id: "6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42"
	I1213 11:44:45.985265  569296 cri.go:89] found id: "efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e"
	I1213 11:44:45.985268  569296 cri.go:89] found id: "e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0"
	I1213 11:44:45.985272  569296 cri.go:89] found id: ""
	I1213 11:44:45.985327  569296 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:44:45.996600  569296 retry.go:31] will retry after 239.115104ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:44:45Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:44:46.236049  569296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:44:46.249171  569296 pause.go:52] kubelet running: false
	I1213 11:44:46.249289  569296 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:44:46.422691  569296 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:44:46.422788  569296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:44:46.491606  569296 cri.go:89] found id: "a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e"
	I1213 11:44:46.491632  569296 cri.go:89] found id: "1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe"
	I1213 11:44:46.491637  569296 cri.go:89] found id: "77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c"
	I1213 11:44:46.491641  569296 cri.go:89] found id: "44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373"
	I1213 11:44:46.491645  569296 cri.go:89] found id: "69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7"
	I1213 11:44:46.491648  569296 cri.go:89] found id: "84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8"
	I1213 11:44:46.491651  569296 cri.go:89] found id: "2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163"
	I1213 11:44:46.491654  569296 cri.go:89] found id: "e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	I1213 11:44:46.491685  569296 cri.go:89] found id: "2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	I1213 11:44:46.491699  569296 cri.go:89] found id: "b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1"
	I1213 11:44:46.491703  569296 cri.go:89] found id: "9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629"
	I1213 11:44:46.491706  569296 cri.go:89] found id: "6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42"
	I1213 11:44:46.491710  569296 cri.go:89] found id: "efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e"
	I1213 11:44:46.491716  569296 cri.go:89] found id: "e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0"
	I1213 11:44:46.491726  569296 cri.go:89] found id: ""
	I1213 11:44:46.491793  569296 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:44:46.503007  569296 retry.go:31] will retry after 304.447665ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:44:46Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:44:46.808665  569296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:44:46.821896  569296 pause.go:52] kubelet running: false
	I1213 11:44:46.821961  569296 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:44:46.958708  569296 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:44:46.958784  569296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:44:47.034515  569296 cri.go:89] found id: "a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e"
	I1213 11:44:47.034590  569296 cri.go:89] found id: "1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe"
	I1213 11:44:47.034609  569296 cri.go:89] found id: "77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c"
	I1213 11:44:47.034628  569296 cri.go:89] found id: "44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373"
	I1213 11:44:47.034671  569296 cri.go:89] found id: "69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7"
	I1213 11:44:47.034695  569296 cri.go:89] found id: "84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8"
	I1213 11:44:47.034715  569296 cri.go:89] found id: "2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163"
	I1213 11:44:47.034733  569296 cri.go:89] found id: "e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	I1213 11:44:47.034769  569296 cri.go:89] found id: "2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	I1213 11:44:47.034788  569296 cri.go:89] found id: "b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1"
	I1213 11:44:47.034804  569296 cri.go:89] found id: "9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629"
	I1213 11:44:47.034837  569296 cri.go:89] found id: "6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42"
	I1213 11:44:47.034860  569296 cri.go:89] found id: "efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e"
	I1213 11:44:47.034880  569296 cri.go:89] found id: "e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0"
	I1213 11:44:47.034899  569296 cri.go:89] found id: ""
	I1213 11:44:47.034976  569296 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:44:47.052896  569296 out.go:203] 
	W1213 11:44:47.055903  569296 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:44:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:44:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 11:44:47.055992  569296 out.go:285] * 
	* 
	W1213 11:44:47.062449  569296 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:44:47.066499  569296 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-649359 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-649359
helpers_test.go:244: (dbg) docker inspect pause-649359:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5",
	        "Created": "2025-12-13T11:43:25.734543259Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 565518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:43:25.802426158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/hosts",
	        "LogPath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5-json.log",
	        "Name": "/pause-649359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-649359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-649359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5",
	                "LowerDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-649359",
	                "Source": "/var/lib/docker/volumes/pause-649359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-649359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-649359",
	                "name.minikube.sigs.k8s.io": "pause-649359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aef66c83926f666d2d814ff6f1bfd1902165a26a0c4418b457f823e86fd248bb",
	            "SandboxKey": "/var/run/docker/netns/aef66c83926f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-649359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:e5:d4:93:f3:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ab2f13bbcc6d97c28d99aa61549f171d25dc9750145fceb3b1056f211a386c9",
	                    "EndpointID": "ecef122a174d39f6ce22594f34fdba1c9d4e77b64ddc89f14f9ac9e8abc0a3ba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-649359",
	                        "99b47e81a3f1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-649359 -n pause-649359
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-649359 -n pause-649359: exit status 2 (354.67051ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-649359 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-649359 logs -n 25: (1.374668995s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-627673 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:31 UTC │ 13 Dec 25 11:31 UTC │
	│ start   │ -p missing-upgrade-438132 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-438132    │ jenkins │ v1.35.0 │ 13 Dec 25 11:31 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:31 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p missing-upgrade-438132 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-438132    │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:33 UTC │
	│ delete  │ -p NoKubernetes-627673                                                                                                                          │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ ssh     │ -p NoKubernetes-627673 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ stop    │ -p NoKubernetes-627673                                                                                                                          │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p NoKubernetes-627673 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ ssh     │ -p NoKubernetes-627673 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ delete  │ -p NoKubernetes-627673                                                                                                                          │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:33 UTC │
	│ delete  │ -p missing-upgrade-438132                                                                                                                       │ missing-upgrade-438132    │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p stopped-upgrade-558323 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-558323    │ jenkins │ v1.35.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ stop    │ -p kubernetes-upgrade-854588                                                                                                                    │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │                     │
	│ stop    │ stopped-upgrade-558323 stop                                                                                                                     │ stopped-upgrade-558323    │ jenkins │ v1.35.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p stopped-upgrade-558323 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-558323    │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:38 UTC │
	│ delete  │ -p stopped-upgrade-558323                                                                                                                       │ stopped-upgrade-558323    │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p running-upgrade-686784 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-686784    │ jenkins │ v1.35.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p running-upgrade-686784 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-686784    │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:43 UTC │
	│ delete  │ -p running-upgrade-686784                                                                                                                       │ running-upgrade-686784    │ jenkins │ v1.37.0 │ 13 Dec 25 11:43 UTC │ 13 Dec 25 11:43 UTC │
	│ start   │ -p pause-649359 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-649359              │ jenkins │ v1.37.0 │ 13 Dec 25 11:43 UTC │ 13 Dec 25 11:44 UTC │
	│ start   │ -p pause-649359 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-649359              │ jenkins │ v1.37.0 │ 13 Dec 25 11:44 UTC │ 13 Dec 25 11:44 UTC │
	│ pause   │ -p pause-649359 --alsologtostderr -v=5                                                                                                          │ pause-649359              │ jenkins │ v1.37.0 │ 13 Dec 25 11:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:44:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:44:15.332656  567960 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:44:15.332895  567960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:44:15.332925  567960 out.go:374] Setting ErrFile to fd 2...
	I1213 11:44:15.332951  567960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:44:15.333356  567960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:44:15.333943  567960 out.go:368] Setting JSON to false
	I1213 11:44:15.335296  567960 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12408,"bootTime":1765613848,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:44:15.335420  567960 start.go:143] virtualization:  
	I1213 11:44:15.338696  567960 out.go:179] * [pause-649359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:44:15.342647  567960 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:44:15.342748  567960 notify.go:221] Checking for updates...
	I1213 11:44:15.348613  567960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:44:15.351881  567960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:44:15.354963  567960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:44:15.357836  567960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:44:15.360819  567960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:44:15.364271  567960 config.go:182] Loaded profile config "pause-649359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:44:15.364857  567960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:44:15.395677  567960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:44:15.395862  567960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:44:15.459601  567960 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:44:15.450151428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:44:15.459719  567960 docker.go:319] overlay module found
	I1213 11:44:15.462884  567960 out.go:179] * Using the docker driver based on existing profile
	I1213 11:44:15.465758  567960 start.go:309] selected driver: docker
	I1213 11:44:15.465782  567960 start.go:927] validating driver "docker" against &{Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:44:15.465927  567960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:44:15.466041  567960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:44:15.532521  567960 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:44:15.515810008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:44:15.532968  567960 cni.go:84] Creating CNI manager for ""
	I1213 11:44:15.533040  567960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:44:15.533093  567960 start.go:353] cluster config:
	{Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:44:15.537457  567960 out.go:179] * Starting "pause-649359" primary control-plane node in "pause-649359" cluster
	I1213 11:44:15.540487  567960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:44:15.543468  567960 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:44:15.546434  567960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:44:15.546490  567960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:44:15.546510  567960 cache.go:65] Caching tarball of preloaded images
	I1213 11:44:15.546520  567960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:44:15.546628  567960 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:44:15.546639  567960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:44:15.546796  567960 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/config.json ...
	I1213 11:44:15.572105  567960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:44:15.572124  567960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:44:15.572139  567960 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:44:15.572172  567960 start.go:360] acquireMachinesLock for pause-649359: {Name:mk9590dc8cde3ee1d19bd97e7fbcc07ea89a081b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:44:15.572225  567960 start.go:364] duration metric: took 36.029µs to acquireMachinesLock for "pause-649359"
	I1213 11:44:15.572245  567960 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:44:15.572250  567960 fix.go:54] fixHost starting: 
	I1213 11:44:15.572523  567960 cli_runner.go:164] Run: docker container inspect pause-649359 --format={{.State.Status}}
	I1213 11:44:15.600325  567960 fix.go:112] recreateIfNeeded on pause-649359: state=Running err=<nil>
	W1213 11:44:15.600356  567960 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:44:15.603600  567960 out.go:252] * Updating the running docker "pause-649359" container ...
	I1213 11:44:15.603640  567960 machine.go:94] provisionDockerMachine start ...
	I1213 11:44:15.603730  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:15.621207  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:15.621533  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:15.621549  567960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:44:15.771262  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-649359
	
	I1213 11:44:15.771286  567960 ubuntu.go:182] provisioning hostname "pause-649359"
	I1213 11:44:15.771356  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:15.789364  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:15.789691  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:15.789706  567960 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-649359 && echo "pause-649359" | sudo tee /etc/hostname
	I1213 11:44:15.952187  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-649359
	
	I1213 11:44:15.952263  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:15.968857  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:15.969165  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:15.969189  567960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-649359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-649359/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-649359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:44:16.129494  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
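	
	The shell snippet above only rewrites or appends the 127.0.1.1 entry when /etc/hosts does not already contain a line ending in the hostname. A minimal illustrative check on the node (hypothetical, not part of the minikube flow):
	
	  # expect a "127.0.1.1 pause-649359" style entry once provisioning has run
	  grep -n 'pause-649359' /etc/hosts
	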
	I1213 11:44:16.129528  567960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:44:16.129553  567960 ubuntu.go:190] setting up certificates
	I1213 11:44:16.129563  567960 provision.go:84] configureAuth start
	I1213 11:44:16.129650  567960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-649359
	I1213 11:44:16.153573  567960 provision.go:143] copyHostCerts
	I1213 11:44:16.153647  567960 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:44:16.153659  567960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:44:16.153736  567960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:44:16.153847  567960 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:44:16.153859  567960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:44:16.153887  567960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:44:16.153955  567960 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:44:16.153966  567960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:44:16.153990  567960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:44:16.154048  567960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.pause-649359 san=[127.0.0.1 192.168.85.2 localhost minikube pause-649359]
	I1213 11:44:16.291794  567960 provision.go:177] copyRemoteCerts
	I1213 11:44:16.291870  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:44:16.291911  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:16.313250  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:16.419655  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:44:16.437800  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:44:16.455492  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:44:16.472567  567960 provision.go:87] duration metric: took 342.972977ms to configureAuth
	I1213 11:44:16.472598  567960 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:44:16.472831  567960 config.go:182] Loaded profile config "pause-649359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:44:16.472947  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:16.490587  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:16.490914  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:16.490935  567960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:44:21.872109  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:44:21.872130  567960 machine.go:97] duration metric: took 6.268481319s to provisionDockerMachine
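	
	The CRIO_MINIKUBE_OPTIONS value written above only takes effect if the crio unit in the kicbase image sources /etc/sysconfig/crio.minikube; assuming that wiring, a quick illustrative verification after the restart (hypothetical commands, not run by minikube):
	
	  sudo cat /etc/sysconfig/crio.minikube
	  # confirm the running crio process picked up the extra flag
	  ps -o args= -C crio | grep -o -- '--insecure-registry [^ ]*'
	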
	I1213 11:44:21.872142  567960 start.go:293] postStartSetup for "pause-649359" (driver="docker")
	I1213 11:44:21.872152  567960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:44:21.872235  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:44:21.872310  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:21.891605  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.001001  567960 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:44:22.009231  567960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:44:22.009265  567960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:44:22.009279  567960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:44:22.009348  567960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:44:22.009446  567960 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:44:22.009563  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:44:22.018693  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:44:22.037698  567960 start.go:296] duration metric: took 165.539838ms for postStartSetup
	I1213 11:44:22.037786  567960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:44:22.037844  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:22.055901  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.156785  567960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:44:22.161855  567960 fix.go:56] duration metric: took 6.589597447s for fixHost
	I1213 11:44:22.161881  567960 start.go:83] releasing machines lock for "pause-649359", held for 6.589646103s
	I1213 11:44:22.161959  567960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-649359
	I1213 11:44:22.178625  567960 ssh_runner.go:195] Run: cat /version.json
	I1213 11:44:22.178658  567960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:44:22.178676  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:22.178713  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:22.201640  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.204101  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.303893  567960 ssh_runner.go:195] Run: systemctl --version
	I1213 11:44:22.397250  567960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:44:22.438251  567960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:44:22.442803  567960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:44:22.442893  567960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:44:22.450995  567960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:44:22.451019  567960 start.go:496] detecting cgroup driver to use...
	I1213 11:44:22.451049  567960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:44:22.451095  567960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:44:22.467266  567960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:44:22.480781  567960 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:44:22.480881  567960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:44:22.495921  567960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:44:22.509929  567960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:44:22.644567  567960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:44:22.781658  567960 docker.go:234] disabling docker service ...
	I1213 11:44:22.781802  567960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:44:22.796700  567960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:44:22.810762  567960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:44:22.971250  567960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:44:23.131932  567960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:44:23.144718  567960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:44:23.158651  567960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:44:23.158723  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.167816  567960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:44:23.167940  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.176293  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.184963  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.193556  567960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:44:23.201597  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.209940  567960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.218758  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.227368  567960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:44:23.234924  567960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:44:23.242088  567960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:44:23.370933  567960 ssh_runner.go:195] Run: sudo systemctl restart crio
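	
	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before the restart; a sketch of what the touched keys are expected to look like afterwards, assuming the stock kicbase fragment (actual file contents may differ):
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # expected, roughly:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	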
	I1213 11:44:23.561204  567960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:44:23.561288  567960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:44:23.565075  567960 start.go:564] Will wait 60s for crictl version
	I1213 11:44:23.565157  567960 ssh_runner.go:195] Run: which crictl
	I1213 11:44:23.568621  567960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:44:23.596492  567960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:44:23.596594  567960 ssh_runner.go:195] Run: crio --version
	I1213 11:44:23.627483  567960 ssh_runner.go:195] Run: crio --version
	I1213 11:44:23.658992  567960 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:44:23.661841  567960 cli_runner.go:164] Run: docker network inspect pause-649359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:44:23.677986  567960 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:44:23.681707  567960 kubeadm.go:884] updating cluster {Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:44:23.681855  567960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:44:23.681913  567960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:44:23.716376  567960 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:44:23.716402  567960 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:44:23.716456  567960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:44:23.741340  567960 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:44:23.741363  567960 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:44:23.741371  567960 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 11:44:23.741470  567960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-649359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
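	
	The kubelet [Unit]/[Service] fragment above is the drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps further down; an illustrative way to inspect the merged unit on the node (hypothetical, not part of this run):
	
	  systemctl cat kubelet
	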
	I1213 11:44:23.741550  567960 ssh_runner.go:195] Run: crio config
	I1213 11:44:23.805903  567960 cni.go:84] Creating CNI manager for ""
	I1213 11:44:23.805927  567960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:44:23.805948  567960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:44:23.805970  567960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-649359 NodeName:pause-649359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:44:23.806100  567960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-649359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
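	
	One way to cross-check the rendered kubeadm configuration above against what the existing cluster actually stores, assuming kubectl access to this profile (illustrative; minikube itself only diffs the yaml files, as seen further below):
	
	  kubectl --context pause-649359 -n kube-system \
	    get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}'
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new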
	
	I1213 11:44:23.806178  567960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:44:23.815367  567960 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:44:23.815444  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:44:23.824054  567960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1213 11:44:23.839983  567960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:44:23.852699  567960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1213 11:44:23.865638  567960 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:44:23.869807  567960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:44:23.997053  567960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:44:24.016509  567960 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359 for IP: 192.168.85.2
	I1213 11:44:24.016533  567960 certs.go:195] generating shared ca certs ...
	I1213 11:44:24.016550  567960 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:44:24.016769  567960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:44:24.016844  567960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:44:24.016859  567960 certs.go:257] generating profile certs ...
	I1213 11:44:24.016986  567960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.key
	I1213 11:44:24.017108  567960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/apiserver.key.afbfc0e7
	I1213 11:44:24.017192  567960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/proxy-client.key
	I1213 11:44:24.017339  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:44:24.017399  567960 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:44:24.017415  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:44:24.017460  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:44:24.017516  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:44:24.017555  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:44:24.017642  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:44:24.018305  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:44:24.037932  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:44:24.057937  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:44:24.076620  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:44:24.097121  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 11:44:24.115401  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:44:24.134375  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:44:24.153524  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:44:24.175310  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:44:24.194239  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:44:24.212849  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:44:24.230874  567960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:44:24.244380  567960 ssh_runner.go:195] Run: openssl version
	I1213 11:44:24.250729  567960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.258574  567960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:44:24.266718  567960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.270585  567960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.270655  567960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.312210  567960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:44:24.320083  567960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.328244  567960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:44:24.336362  567960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.340370  567960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.340436  567960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.383264  567960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:44:24.391563  567960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.399492  567960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:44:24.407768  567960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.412072  567960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.412178  567960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.453922  567960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
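	
	The 3ec20f2e.0, b5213941.0 and 51391683.0 names tested above follow the standard OpenSSL subject-hash convention for /etc/ssl/certs; an illustrative way to reproduce one of them by hand:
	
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  ls -l "/etc/ssl/certs/${HASH}.0"
	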
	I1213 11:44:24.461701  567960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:44:24.465631  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:44:24.513241  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:44:24.595357  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:44:24.681867  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:44:24.792030  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:44:24.908306  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
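	
	In the checks above, "-checkend 86400" asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire inside that window. Illustrative standalone use against one of the same files:
	
	  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	    && echo "ok: valid for at least 24h" || echo "warning: expires within 24h"
	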
	I1213 11:44:24.980068  567960 kubeadm.go:401] StartCluster: {Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:44:24.980251  567960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:44:24.980345  567960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:44:25.026336  567960 cri.go:89] found id: "1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe"
	I1213 11:44:25.026411  567960 cri.go:89] found id: "77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c"
	I1213 11:44:25.026446  567960 cri.go:89] found id: "44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373"
	I1213 11:44:25.026482  567960 cri.go:89] found id: "69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7"
	I1213 11:44:25.026501  567960 cri.go:89] found id: "84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8"
	I1213 11:44:25.026536  567960 cri.go:89] found id: "2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163"
	I1213 11:44:25.026559  567960 cri.go:89] found id: "e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	I1213 11:44:25.026580  567960 cri.go:89] found id: "2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	I1213 11:44:25.026613  567960 cri.go:89] found id: "b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1"
	I1213 11:44:25.026650  567960 cri.go:89] found id: "9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629"
	I1213 11:44:25.026669  567960 cri.go:89] found id: "6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42"
	I1213 11:44:25.026702  567960 cri.go:89] found id: "efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e"
	I1213 11:44:25.026726  567960 cri.go:89] found id: "e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0"
	I1213 11:44:25.026746  567960 cri.go:89] found id: ""
	I1213 11:44:25.026832  567960 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:44:25.044174  567960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:44:25Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:44:25.044247  567960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:44:25.068274  567960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:44:25.068295  567960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:44:25.068370  567960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:44:25.079797  567960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:44:25.080652  567960 kubeconfig.go:125] found "pause-649359" server: "https://192.168.85.2:8443"
	I1213 11:44:25.085071  567960 kapi.go:59] client config for pause-649359: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:44:25.094686  567960 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:44:25.094929  567960 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 11:44:25.094962  567960 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 11:44:25.094983  567960 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:44:25.095017  567960 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 11:44:25.096820  567960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:44:25.108471  567960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:44:25.108551  567960 kubeadm.go:602] duration metric: took 40.249258ms to restartPrimaryControlPlane
	I1213 11:44:25.108576  567960 kubeadm.go:403] duration metric: took 128.516358ms to StartCluster
	I1213 11:44:25.108627  567960 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:44:25.108725  567960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:44:25.109742  567960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:44:25.110048  567960 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:44:25.110446  567960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:44:25.110964  567960 config.go:182] Loaded profile config "pause-649359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:44:25.116451  567960 out.go:179] * Enabled addons: 
	I1213 11:44:25.116587  567960 out.go:179] * Verifying Kubernetes components...
	I1213 11:44:25.119421  567960 addons.go:530] duration metric: took 8.973209ms for enable addons: enabled=[]
	I1213 11:44:25.119479  567960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:44:25.415374  567960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:44:25.437526  567960 node_ready.go:35] waiting up to 6m0s for node "pause-649359" to be "Ready" ...
	I1213 11:44:28.437529  567960 node_ready.go:49] node "pause-649359" is "Ready"
	I1213 11:44:28.437610  567960 node_ready.go:38] duration metric: took 3.000047198s for node "pause-649359" to be "Ready" ...
	I1213 11:44:28.437640  567960 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:44:28.437736  567960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:44:28.449599  567960 api_server.go:72] duration metric: took 3.339481227s to wait for apiserver process to appear ...
	I1213 11:44:28.449681  567960 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:44:28.449734  567960 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:44:28.496200  567960 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:44:28.496285  567960 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:44:28.949856  567960 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:44:28.960439  567960 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:44:28.960519  567960 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:44:29.449864  567960 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:44:29.464235  567960 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 11:44:29.466391  567960 api_server.go:141] control plane version: v1.34.2
	I1213 11:44:29.466470  567960 api_server.go:131] duration metric: took 1.016751508s to wait for apiserver health ...
	I1213 11:44:29.466495  567960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:44:29.471392  567960 system_pods.go:59] 7 kube-system pods found
	I1213 11:44:29.471439  567960 system_pods.go:61] "coredns-66bc5c9577-g2449" [e851be5d-0744-4a63-8a57-05546a6999f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:44:29.471450  567960 system_pods.go:61] "etcd-pause-649359" [8bec0f3b-3cfd-49d4-a9b4-1b5899a24d2a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:44:29.471456  567960 system_pods.go:61] "kindnet-dlvx8" [ea100ffe-c03c-495d-bb02-d7340382cb8b] Running
	I1213 11:44:29.471464  567960 system_pods.go:61] "kube-apiserver-pause-649359" [b7810941-e870-480f-be70-6b27f530961c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:44:29.471471  567960 system_pods.go:61] "kube-controller-manager-pause-649359" [6bc3d848-1c2b-4fd9-b647-91ac5edd2968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:44:29.471480  567960 system_pods.go:61] "kube-proxy-4p5n9" [be1a6262-3cc1-43f5-8671-3e19f21ba33e] Running
	I1213 11:44:29.471486  567960 system_pods.go:61] "kube-scheduler-pause-649359" [47261308-7ddf-41ea-a700-278e13986378] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:44:29.471492  567960 system_pods.go:74] duration metric: took 4.977192ms to wait for pod list to return data ...
	I1213 11:44:29.471508  567960 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:44:29.473995  567960 default_sa.go:45] found service account: "default"
	I1213 11:44:29.474068  567960 default_sa.go:55] duration metric: took 2.527365ms for default service account to be created ...
	I1213 11:44:29.474108  567960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:44:29.478462  567960 system_pods.go:86] 7 kube-system pods found
	I1213 11:44:29.478499  567960 system_pods.go:89] "coredns-66bc5c9577-g2449" [e851be5d-0744-4a63-8a57-05546a6999f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:44:29.478510  567960 system_pods.go:89] "etcd-pause-649359" [8bec0f3b-3cfd-49d4-a9b4-1b5899a24d2a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:44:29.478516  567960 system_pods.go:89] "kindnet-dlvx8" [ea100ffe-c03c-495d-bb02-d7340382cb8b] Running
	I1213 11:44:29.478523  567960 system_pods.go:89] "kube-apiserver-pause-649359" [b7810941-e870-480f-be70-6b27f530961c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:44:29.478530  567960 system_pods.go:89] "kube-controller-manager-pause-649359" [6bc3d848-1c2b-4fd9-b647-91ac5edd2968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:44:29.478538  567960 system_pods.go:89] "kube-proxy-4p5n9" [be1a6262-3cc1-43f5-8671-3e19f21ba33e] Running
	I1213 11:44:29.478544  567960 system_pods.go:89] "kube-scheduler-pause-649359" [47261308-7ddf-41ea-a700-278e13986378] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:44:29.478550  567960 system_pods.go:126] duration metric: took 4.418715ms to wait for k8s-apps to be running ...
	I1213 11:44:29.478589  567960 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:44:29.478658  567960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:44:29.498704  567960 system_svc.go:56] duration metric: took 20.106152ms WaitForService to wait for kubelet
	I1213 11:44:29.498736  567960 kubeadm.go:587] duration metric: took 4.388622719s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:44:29.498755  567960 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:44:29.504601  567960 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:44:29.504645  567960 node_conditions.go:123] node cpu capacity is 2
	I1213 11:44:29.504658  567960 node_conditions.go:105] duration metric: took 5.89746ms to run NodePressure ...
	I1213 11:44:29.504672  567960 start.go:242] waiting for startup goroutines ...
	I1213 11:44:29.504680  567960 start.go:247] waiting for cluster config update ...
	I1213 11:44:29.504701  567960 start.go:256] writing updated cluster config ...
	I1213 11:44:29.505053  567960 ssh_runner.go:195] Run: rm -f paused
	I1213 11:44:29.510956  567960 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:44:29.511748  567960 kapi.go:59] client config for pause-649359: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:44:29.517163  567960 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g2449" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:44:31.522992  567960 pod_ready.go:104] pod "coredns-66bc5c9577-g2449" is not "Ready", error: <nil>
	W1213 11:44:33.527961  567960 pod_ready.go:104] pod "coredns-66bc5c9577-g2449" is not "Ready", error: <nil>
	W1213 11:44:36.023418  567960 pod_ready.go:104] pod "coredns-66bc5c9577-g2449" is not "Ready", error: <nil>
	I1213 11:44:37.023804  567960 pod_ready.go:94] pod "coredns-66bc5c9577-g2449" is "Ready"
	I1213 11:44:37.023837  567960 pod_ready.go:86] duration metric: took 7.506647215s for pod "coredns-66bc5c9577-g2449" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:37.027394  567960 pod_ready.go:83] waiting for pod "etcd-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:44:39.033572  567960 pod_ready.go:104] pod "etcd-pause-649359" is not "Ready", error: <nil>
	W1213 11:44:41.035919  567960 pod_ready.go:104] pod "etcd-pause-649359" is not "Ready", error: <nil>
	W1213 11:44:43.533057  567960 pod_ready.go:104] pod "etcd-pause-649359" is not "Ready", error: <nil>
	I1213 11:44:44.034694  567960 pod_ready.go:94] pod "etcd-pause-649359" is "Ready"
	I1213 11:44:44.034724  567960 pod_ready.go:86] duration metric: took 7.00730098s for pod "etcd-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.038051  567960 pod_ready.go:83] waiting for pod "kube-apiserver-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.044067  567960 pod_ready.go:94] pod "kube-apiserver-pause-649359" is "Ready"
	I1213 11:44:44.044099  567960 pod_ready.go:86] duration metric: took 6.019413ms for pod "kube-apiserver-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.047195  567960 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.053164  567960 pod_ready.go:94] pod "kube-controller-manager-pause-649359" is "Ready"
	I1213 11:44:44.053196  567960 pod_ready.go:86] duration metric: took 5.970962ms for pod "kube-controller-manager-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.056461  567960 pod_ready.go:83] waiting for pod "kube-proxy-4p5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.231436  567960 pod_ready.go:94] pod "kube-proxy-4p5n9" is "Ready"
	I1213 11:44:44.231467  567960 pod_ready.go:86] duration metric: took 174.976547ms for pod "kube-proxy-4p5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.430513  567960 pod_ready.go:83] waiting for pod "kube-scheduler-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.831361  567960 pod_ready.go:94] pod "kube-scheduler-pause-649359" is "Ready"
	I1213 11:44:44.831388  567960 pod_ready.go:86] duration metric: took 400.84446ms for pod "kube-scheduler-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.831401  567960 pod_ready.go:40] duration metric: took 15.32039938s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:44:44.883862  567960 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 11:44:44.887266  567960 out.go:179] * Done! kubectl is now configured to use "pause-649359" cluster and "default" namespace by default
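
Note: the log above polls https://192.168.85.2:8443/healthz until it returns 200, logging the per-check [+]/[-] detail on each 500. The same check can be repeated by hand; this is a minimal sketch, assuming the cluster from this run is still up and that kubectl is configured with the "pause-649359" context reported in the final log line:

	# ask the apiserver for per-check health detail (the [+]/[-] lines shown above)
	kubectl --context pause-649359 get --raw='/healthz?verbose'
	# or hit the endpoint directly, skipping certificate verification
	curl -k 'https://192.168.85.2:8443/healthz?verbose'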
	
	
	==> CRI-O <==
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.891313689Z" level=info msg="Started container" PID=2405 containerID=77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c description=kube-system/coredns-66bc5c9577-g2449/coredns id=1ba4f64e-744f-4b5e-b580-b90fb81ac8f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1ba17b8c6fb5148f17d586be6c9c21c081de4667ddd07449123fe26bc1429e8
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.927782824Z" level=info msg="Created container 1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe: kube-system/etcd-pause-649359/etcd" id=b178e0cc-34cd-4fc3-8d07-ea691a14ef57 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.930680382Z" level=info msg="Starting container: 1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe" id=04573513-840d-4a1a-bc69-44f1ef6204c7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.935653094Z" level=info msg="Started container" PID=2411 containerID=1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe description=kube-system/etcd-pause-649359/etcd id=04573513-840d-4a1a-bc69-44f1ef6204c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97f636fbd4353e095d1a7d30fdb5c7308acd123f1a4d1c556b44c89447b3fd34
	Dec 13 11:44:25 pause-649359 crio[2134]: time="2025-12-13T11:44:25.38303171Z" level=info msg="Created container a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e: kube-system/kube-proxy-4p5n9/kube-proxy" id=49aa4f2a-f3da-4def-a690-18d12ecbb4ca name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:44:25 pause-649359 crio[2134]: time="2025-12-13T11:44:25.383856568Z" level=info msg="Starting container: a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e" id=43480b78-8ada-496b-943f-d90c1086c26c name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:44:25 pause-649359 crio[2134]: time="2025-12-13T11:44:25.38636996Z" level=info msg="Started container" PID=2414 containerID=a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e description=kube-system/kube-proxy-4p5n9/kube-proxy id=43480b78-8ada-496b-943f-d90c1086c26c name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbd0d9b2bb111ca275da2cc8eac154610425f9e1660d17fccd65affdacbc87e9
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.228168106Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.231777685Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.231815306Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.231838502Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.235335062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.235374948Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.235397315Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.238623162Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.238657649Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.238680607Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.2418123Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.241845604Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.241868431Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.244885449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.244920427Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.244942376Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.248016067Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.248048272Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a8e21604e0277       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                     23 seconds ago       Running             kube-proxy                1                   bbd0d9b2bb111       kube-proxy-4p5n9                       kube-system
	1d4978198e9f7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     23 seconds ago       Running             etcd                      1                   97f636fbd4353       etcd-pause-649359                      kube-system
	77ddb3b808ba3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     23 seconds ago       Running             coredns                   1                   e1ba17b8c6fb5       coredns-66bc5c9577-g2449               kube-system
	44ffe7a6e162c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     23 seconds ago       Running             kindnet-cni               1                   1f898fa729ff0       kindnet-dlvx8                          kube-system
	69ba67493b5e9       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                     23 seconds ago       Running             kube-controller-manager   1                   2fc9a3d4e2f7e       kube-controller-manager-pause-649359   kube-system
	84a10b3ea40e6       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                     23 seconds ago       Running             kube-apiserver            1                   40a8337351a1f       kube-apiserver-pause-649359            kube-system
	2000805fb9746       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                     23 seconds ago       Running             kube-scheduler            1                   71779f4f5c867       kube-scheduler-pause-649359            kube-system
	e1b0b540fa92c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     35 seconds ago       Exited              coredns                   0                   e1ba17b8c6fb5       coredns-66bc5c9577-g2449               kube-system
	2a7de57d4a05a       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   45 seconds ago       Exited              kindnet-cni               0                   1f898fa729ff0       kindnet-dlvx8                          kube-system
	b42074f6e347e       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                     48 seconds ago       Exited              kube-proxy                0                   bbd0d9b2bb111       kube-proxy-4p5n9                       kube-system
	9a3d70eeb43c6       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                     About a minute ago   Exited              kube-apiserver            0                   40a8337351a1f       kube-apiserver-pause-649359            kube-system
	6c2fc3a72c623       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                     About a minute ago   Exited              kube-scheduler            0                   71779f4f5c867       kube-scheduler-pause-649359            kube-system
	efb2aed1c0f23       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                     About a minute ago   Exited              kube-controller-manager   0                   2fc9a3d4e2f7e       kube-controller-manager-pause-649359   kube-system
	e7c15552f9059       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     About a minute ago   Exited              etcd                      0                   97f636fbd4353       etcd-pause-649359                      kube-system
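
Note: the table above is CRI-level container state as reported by CRI-O, including the exited attempt-0 containers that were replaced during the restart. A sketch of how to regenerate it on the node, assuming the profile name from this run:

	# open a shell on the minikube node and list all containers, including exited ones
	minikube -p pause-649359 ssh -- sudo crictl ps -a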
	
	
	==> coredns [77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37184 - 58638 "HINFO IN 5910939681083719222.9078383055485944623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.06058765s
	
	
	==> coredns [e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49342 - 14843 "HINFO IN 1904354533378192754.2073838843292775205. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022532813s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-649359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-649359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=pause-649359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_43_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:43:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-649359
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:44:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:43:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:43:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:43:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:44:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-649359
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                97c343b6-2965-464e-b5a7-2664ffc95532
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-g2449                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     49s
	  kube-system                 etcd-pause-649359                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         54s
	  kube-system                 kindnet-dlvx8                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-pause-649359             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-pause-649359    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-4p5n9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-pause-649359             100m (5%)     0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19s                kube-proxy       
	  Normal   Starting                 47s                kube-proxy       
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node pause-649359 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node pause-649359 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node pause-649359 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 54s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  54s                kubelet          Node pause-649359 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s                kubelet          Node pause-649359 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s                kubelet          Node pause-649359 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                node-controller  Node pause-649359 event: Registered Node pause-649359 in Controller
	  Normal   CIDRAssignmentFailed     50s                cidrAllocator    Node pause-649359 status is now: CIDRAssignmentFailed
	  Normal   NodeReady                36s                kubelet          Node pause-649359 status is now: NodeReady
	  Normal   RegisteredNode           18s                node-controller  Node pause-649359 event: Registered Node pause-649359 in Controller
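
Note: the node description above is standard kubectl output and can be regenerated against the same context while the cluster is still running, e.g.:

	kubectl --context pause-649359 describe node pause-649359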
	
	
	==> dmesg <==
	[  +3.372319] overlayfs: idmapped layers are currently not supported
	[ +34.539888] overlayfs: idmapped layers are currently not supported
	[Dec13 11:12] overlayfs: idmapped layers are currently not supported
	[Dec13 11:13] overlayfs: idmapped layers are currently not supported
	[  +3.803792] overlayfs: idmapped layers are currently not supported
	[Dec13 11:14] overlayfs: idmapped layers are currently not supported
	[ +27.964028] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe] <==
	{"level":"warn","ts":"2025-12-13T11:44:26.608078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.641389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.692868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.714549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.760559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.793363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.835702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.888664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.903606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.935896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.963454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.981067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.992994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.013942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.064206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.074635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.092177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.102569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.125755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.140281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.163183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.190905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.213536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.235199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.307604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	
	
	==> etcd [e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0] <==
	{"level":"warn","ts":"2025-12-13T11:43:50.444908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.466506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.487555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.548074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.549000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.568774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.663404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40314","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T11:44:16.662303Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T11:44:16.662356Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-649359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-13T11:44:16.662469Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:44:16.933349Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:44:16.933434Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:44:16.933488Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-13T11:44:16.933564Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-13T11:44:16.933601Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933609Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933665Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933687Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:44:16.933695Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:44:16.933708Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:44:16.936981Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-13T11:44:16.937066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:44:16.937103Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:44:16.937113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-649359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 11:44:48 up  3:27,  0 user,  load average: 2.19, 1.70, 1.83
	Linux pause-649359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78] <==
	I1213 11:44:02.129355       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:44:02.130618       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:44:02.130795       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:44:02.130835       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:44:02.130874       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:44:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:44:02.335087       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:44:02.419589       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:44:02.419721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:44:02.420671       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:44:02.620107       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:44:02.620205       1 metrics.go:72] Registering metrics
	I1213 11:44:02.620303       1 controller.go:711] "Syncing nftables rules"
	I1213 11:44:12.339596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:44:12.339653       1 main.go:301] handling current node
	
	
	==> kindnet [44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373] <==
	I1213 11:44:25.024764       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:44:25.027628       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:44:25.028226       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:44:25.029924       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:44:25.029985       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:44:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:44:25.232494       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:44:25.232576       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:44:25.232611       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:44:25.232807       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:44:28.535042       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:44:28.535153       1 metrics.go:72] Registering metrics
	I1213 11:44:28.535257       1 controller.go:711] "Syncing nftables rules"
	I1213 11:44:35.227730       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:44:35.227887       1 main.go:301] handling current node
	I1213 11:44:45.227696       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:44:45.227812       1 main.go:301] handling current node
	
	
	==> kube-apiserver [84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8] <==
	I1213 11:44:28.418690       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:44:28.447829       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 11:44:28.447871       1 policy_source.go:240] refreshing policies
	I1213 11:44:28.447910       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 11:44:28.447954       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:44:28.447982       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:44:28.447996       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:44:28.448112       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 11:44:28.465823       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 11:44:28.486314       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:44:28.498726       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:44:28.522610       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:44:28.522806       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:44:28.502643       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 11:44:28.523140       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:44:28.530748       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:44:28.531164       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:44:28.542934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1213 11:44:28.552614       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:44:28.998753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:44:29.409718       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:44:30.873800       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:44:31.072816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:44:31.122831       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 11:44:31.174936       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629] <==
	W1213 11:44:16.690104       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690564       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690642       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690698       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690752       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690804       1 logging.go:55] [core] [Channel #17 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690853       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690909       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690958       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691027       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691082       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691134       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691186       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691422       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691507       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691585       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691649       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691708       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691761       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691814       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691870       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691923       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.692120       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.692575       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7] <==
	I1213 11:44:30.784622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 11:44:30.788835       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 11:44:30.792110       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:44:30.794354       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:44:30.797541       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 11:44:30.797626       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 11:44:30.797696       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-649359"
	I1213 11:44:30.797743       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 11:44:30.800064       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 11:44:30.802286       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:44:30.803507       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:44:30.808938       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:44:30.817133       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 11:44:30.817151       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 11:44:30.817170       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 11:44:30.817187       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:44:30.820407       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:44:30.821464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:44:30.825918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:44:30.828072       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:44:30.830352       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:44:30.836658       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:44:30.844861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:44:30.844888       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:44:30.844895       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e] <==
	I1213 11:43:58.635756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 11:43:58.635786       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 11:43:58.635813       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:43:58.635873       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 11:43:58.636195       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 11:43:58.636255       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 11:43:58.636332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 11:43:58.640005       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:43:58.640083       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:43:58.640269       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:43:58.640546       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 11:43:58.640573       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 11:43:58.640623       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:43:58.642690       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 11:43:58.642805       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 11:43:58.642867       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 11:43:58.642896       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 11:43:58.642924       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 11:43:58.647851       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:43:58.656401       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:43:58.659003       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-649359" podCIDRs=["10.244.0.0/24"]
	E1213 11:43:58.673094       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"pause-649359\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-649359" podCIDRs=["10.244.1.0/24"]
	E1213 11:43:58.673154       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"pause-649359\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-649359"
	E1213 11:43:58.673195       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'pause-649359': failed to patch node CIDR: Node \"pause-649359\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1213 11:44:13.664518       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e] <==
	I1213 11:44:25.778028       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:44:26.524609       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:44:28.611552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:44:28.619563       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:44:28.624337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:44:28.733708       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:44:28.733769       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:44:28.761239       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:44:28.761681       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:44:28.761999       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:44:28.763474       1 config.go:200] "Starting service config controller"
	I1213 11:44:28.791035       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:44:28.769257       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:44:28.791250       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:44:28.769288       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:44:28.791349       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:44:28.790882       1 config.go:309] "Starting node config controller"
	I1213 11:44:28.791414       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:44:28.791561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:44:28.891463       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:44:28.891589       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:44:28.891605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1] <==
	I1213 11:43:59.868699       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:43:59.969032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:44:00.084166       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:44:00.084237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:44:00.084375       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:44:00.374702       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:44:00.374780       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:44:00.450080       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:44:00.450486       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:44:00.450501       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:44:00.458151       1 config.go:200] "Starting service config controller"
	I1213 11:44:00.458184       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:44:00.458217       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:44:00.458222       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:44:00.458236       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:44:00.458240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:44:00.459001       1 config.go:309] "Starting node config controller"
	I1213 11:44:00.459021       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:44:00.459029       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:44:00.559982       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:44:00.560020       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:44:00.560065       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163] <==
	I1213 11:44:28.085689       1 serving.go:386] Generated self-signed cert in-memory
	I1213 11:44:29.044078       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:44:29.044188       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:44:29.066735       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:44:29.066823       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 11:44:29.066852       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 11:44:29.066874       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:44:29.083396       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:29.083428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:29.083446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:44:29.083451       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:44:29.167299       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 11:44:29.183963       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:44:29.184090       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42] <==
	E1213 11:43:52.718195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 11:43:52.718307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 11:43:52.718345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 11:43:52.718375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 11:43:52.721016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 11:43:52.721141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:43:52.721248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 11:43:52.721541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 11:43:52.721653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 11:43:52.721760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:43:52.721864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 11:43:52.721957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 11:43:52.722188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 11:43:52.722320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 11:43:52.722424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 11:43:52.723168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 11:43:52.724770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 11:43:52.724775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1213 11:43:54.211090       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:16.670556       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 11:44:16.670585       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 11:44:16.670608       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 11:44:16.670635       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:16.670972       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 11:44:16.670996       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.617887    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="30e3c7d20b09ce02b630882db0497c98" pod="kube-system/kube-scheduler-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618064    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="205e683250695d1a163b559880967ff2" pod="kube-system/etcd-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618219    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="40b13f97116b19b789e50b2d595988bc" pod="kube-system/kube-apiserver-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618368    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="119a20aa2837a3826d8f18f6cb4520f6" pod="kube-system/kube-controller-manager-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618520    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5n9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be1a6262-3cc1-43f5-8671-3e19f21ba33e" pod="kube-system/kube-proxy-4p5n9"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: I1213 11:44:24.620947    1331 scope.go:117] "RemoveContainer" containerID="2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.621688    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="119a20aa2837a3826d8f18f6cb4520f6" pod="kube-system/kube-controller-manager-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.621984    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-dlvx8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ea100ffe-c03c-495d-bb02-d7340382cb8b" pod="kube-system/kindnet-dlvx8"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.622255    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5n9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be1a6262-3cc1-43f5-8671-3e19f21ba33e" pod="kube-system/kube-proxy-4p5n9"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.622515    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="30e3c7d20b09ce02b630882db0497c98" pod="kube-system/kube-scheduler-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.622773    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="205e683250695d1a163b559880967ff2" pod="kube-system/etcd-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.623298    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="40b13f97116b19b789e50b2d595988bc" pod="kube-system/kube-apiserver-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: I1213 11:44:24.645130    1331 scope.go:117] "RemoveContainer" containerID="e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.645556    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-g2449\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e851be5d-0744-4a63-8a57-05546a6999f2" pod="kube-system/coredns-66bc5c9577-g2449"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.645850    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="30e3c7d20b09ce02b630882db0497c98" pod="kube-system/kube-scheduler-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.646125    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="205e683250695d1a163b559880967ff2" pod="kube-system/etcd-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.646455    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="40b13f97116b19b789e50b2d595988bc" pod="kube-system/kube-apiserver-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.646769    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="119a20aa2837a3826d8f18f6cb4520f6" pod="kube-system/kube-controller-manager-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.647084    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-dlvx8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ea100ffe-c03c-495d-bb02-d7340382cb8b" pod="kube-system/kindnet-dlvx8"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.647375    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5n9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be1a6262-3cc1-43f5-8671-3e19f21ba33e" pod="kube-system/kube-proxy-4p5n9"
	Dec 13 11:44:34 pause-649359 kubelet[1331]: W1213 11:44:34.383620    1331 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 11:44:44 pause-649359 kubelet[1331]: W1213 11:44:44.398372    1331 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 11:44:45 pause-649359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:44:45 pause-649359 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:44:45 pause-649359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-649359 -n pause-649359
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-649359 -n pause-649359: exit status 2 (357.813153ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-649359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-649359
helpers_test.go:244: (dbg) docker inspect pause-649359:

-- stdout --
	[
	    {
	        "Id": "99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5",
	        "Created": "2025-12-13T11:43:25.734543259Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 565518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:43:25.802426158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/hostname",
	        "HostsPath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/hosts",
	        "LogPath": "/var/lib/docker/containers/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5/99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5-json.log",
	        "Name": "/pause-649359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-649359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-649359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99b47e81a3f13292218dfd0776f61df09821bc28964319588da9055bed7d0bb5",
	                "LowerDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d82b2d3bc4c6388d5608e2ed10fce3f78764b8e21901ca46de7566feed89499d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-649359",
	                "Source": "/var/lib/docker/volumes/pause-649359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-649359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-649359",
	                "name.minikube.sigs.k8s.io": "pause-649359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aef66c83926f666d2d814ff6f1bfd1902165a26a0c4418b457f823e86fd248bb",
	            "SandboxKey": "/var/run/docker/netns/aef66c83926f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-649359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:e5:d4:93:f3:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1ab2f13bbcc6d97c28d99aa61549f171d25dc9750145fceb3b1056f211a386c9",
	                    "EndpointID": "ecef122a174d39f6ce22594f34fdba1c9d4e77b64ddc89f14f9ac9e8abc0a3ba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-649359",
	                        "99b47e81a3f1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-649359 -n pause-649359
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-649359 -n pause-649359: exit status 2 (353.894298ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-649359 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-649359 logs -n 25: (1.403020806s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-627673 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:31 UTC │ 13 Dec 25 11:31 UTC │
	│ start   │ -p missing-upgrade-438132 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-438132    │ jenkins │ v1.35.0 │ 13 Dec 25 11:31 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:31 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p missing-upgrade-438132 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-438132    │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:33 UTC │
	│ delete  │ -p NoKubernetes-627673                                                                                                                          │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ ssh     │ -p NoKubernetes-627673 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ stop    │ -p NoKubernetes-627673                                                                                                                          │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p NoKubernetes-627673 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ ssh     │ -p NoKubernetes-627673 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │                     │
	│ delete  │ -p NoKubernetes-627673                                                                                                                          │ NoKubernetes-627673       │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:32 UTC │
	│ start   │ -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:32 UTC │ 13 Dec 25 11:33 UTC │
	│ delete  │ -p missing-upgrade-438132                                                                                                                       │ missing-upgrade-438132    │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p stopped-upgrade-558323 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-558323    │ jenkins │ v1.35.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ stop    │ -p kubernetes-upgrade-854588                                                                                                                    │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p kubernetes-upgrade-854588 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │                     │
	│ stop    │ stopped-upgrade-558323 stop                                                                                                                     │ stopped-upgrade-558323    │ jenkins │ v1.35.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:33 UTC │
	│ start   │ -p stopped-upgrade-558323 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-558323    │ jenkins │ v1.37.0 │ 13 Dec 25 11:33 UTC │ 13 Dec 25 11:38 UTC │
	│ delete  │ -p stopped-upgrade-558323                                                                                                                       │ stopped-upgrade-558323    │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p running-upgrade-686784 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-686784    │ jenkins │ v1.35.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:38 UTC │
	│ start   │ -p running-upgrade-686784 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-686784    │ jenkins │ v1.37.0 │ 13 Dec 25 11:38 UTC │ 13 Dec 25 11:43 UTC │
	│ delete  │ -p running-upgrade-686784                                                                                                                       │ running-upgrade-686784    │ jenkins │ v1.37.0 │ 13 Dec 25 11:43 UTC │ 13 Dec 25 11:43 UTC │
	│ start   │ -p pause-649359 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-649359              │ jenkins │ v1.37.0 │ 13 Dec 25 11:43 UTC │ 13 Dec 25 11:44 UTC │
	│ start   │ -p pause-649359 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-649359              │ jenkins │ v1.37.0 │ 13 Dec 25 11:44 UTC │ 13 Dec 25 11:44 UTC │
	│ pause   │ -p pause-649359 --alsologtostderr -v=5                                                                                                          │ pause-649359              │ jenkins │ v1.37.0 │ 13 Dec 25 11:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:44:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:44:15.332656  567960 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:44:15.332895  567960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:44:15.332925  567960 out.go:374] Setting ErrFile to fd 2...
	I1213 11:44:15.332951  567960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:44:15.333356  567960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:44:15.333943  567960 out.go:368] Setting JSON to false
	I1213 11:44:15.335296  567960 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12408,"bootTime":1765613848,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:44:15.335420  567960 start.go:143] virtualization:  
	I1213 11:44:15.338696  567960 out.go:179] * [pause-649359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:44:15.342647  567960 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:44:15.342748  567960 notify.go:221] Checking for updates...
	I1213 11:44:15.348613  567960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:44:15.351881  567960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:44:15.354963  567960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:44:15.357836  567960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:44:15.360819  567960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:44:15.364271  567960 config.go:182] Loaded profile config "pause-649359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:44:15.364857  567960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:44:15.395677  567960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:44:15.395862  567960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:44:15.459601  567960 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:44:15.450151428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:44:15.459719  567960 docker.go:319] overlay module found
	I1213 11:44:15.462884  567960 out.go:179] * Using the docker driver based on existing profile
	I1213 11:44:15.465758  567960 start.go:309] selected driver: docker
	I1213 11:44:15.465782  567960 start.go:927] validating driver "docker" against &{Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:44:15.465927  567960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:44:15.466041  567960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:44:15.532521  567960 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:44:15.515810008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:44:15.532968  567960 cni.go:84] Creating CNI manager for ""
	I1213 11:44:15.533040  567960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:44:15.533093  567960 start.go:353] cluster config:
	{Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:44:15.537457  567960 out.go:179] * Starting "pause-649359" primary control-plane node in "pause-649359" cluster
	I1213 11:44:15.540487  567960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:44:15.543468  567960 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:44:15.546434  567960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:44:15.546490  567960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:44:15.546510  567960 cache.go:65] Caching tarball of preloaded images
	I1213 11:44:15.546520  567960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:44:15.546628  567960 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:44:15.546639  567960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:44:15.546796  567960 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/config.json ...
	I1213 11:44:15.572105  567960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:44:15.572124  567960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:44:15.572139  567960 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:44:15.572172  567960 start.go:360] acquireMachinesLock for pause-649359: {Name:mk9590dc8cde3ee1d19bd97e7fbcc07ea89a081b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:44:15.572225  567960 start.go:364] duration metric: took 36.029µs to acquireMachinesLock for "pause-649359"
	I1213 11:44:15.572245  567960 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:44:15.572250  567960 fix.go:54] fixHost starting: 
	I1213 11:44:15.572523  567960 cli_runner.go:164] Run: docker container inspect pause-649359 --format={{.State.Status}}
	I1213 11:44:15.600325  567960 fix.go:112] recreateIfNeeded on pause-649359: state=Running err=<nil>
	W1213 11:44:15.600356  567960 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:44:15.603600  567960 out.go:252] * Updating the running docker "pause-649359" container ...
	I1213 11:44:15.603640  567960 machine.go:94] provisionDockerMachine start ...
	I1213 11:44:15.603730  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:15.621207  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:15.621533  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:15.621549  567960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:44:15.771262  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-649359
	
	I1213 11:44:15.771286  567960 ubuntu.go:182] provisioning hostname "pause-649359"
	I1213 11:44:15.771356  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:15.789364  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:15.789691  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:15.789706  567960 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-649359 && echo "pause-649359" | sudo tee /etc/hostname
	I1213 11:44:15.952187  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-649359
	
	I1213 11:44:15.952263  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:15.968857  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:15.969165  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:15.969189  567960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-649359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-649359/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-649359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:44:16.129494  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:44:16.129528  567960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:44:16.129553  567960 ubuntu.go:190] setting up certificates
	I1213 11:44:16.129563  567960 provision.go:84] configureAuth start
	I1213 11:44:16.129650  567960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-649359
	I1213 11:44:16.153573  567960 provision.go:143] copyHostCerts
	I1213 11:44:16.153647  567960 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:44:16.153659  567960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:44:16.153736  567960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:44:16.153847  567960 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:44:16.153859  567960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:44:16.153887  567960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:44:16.153955  567960 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:44:16.153966  567960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:44:16.153990  567960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:44:16.154048  567960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.pause-649359 san=[127.0.0.1 192.168.85.2 localhost minikube pause-649359]
	I1213 11:44:16.291794  567960 provision.go:177] copyRemoteCerts
	I1213 11:44:16.291870  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:44:16.291911  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:16.313250  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:16.419655  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:44:16.437800  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 11:44:16.455492  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:44:16.472567  567960 provision.go:87] duration metric: took 342.972977ms to configureAuth
	I1213 11:44:16.472598  567960 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:44:16.472831  567960 config.go:182] Loaded profile config "pause-649359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:44:16.472947  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:16.490587  567960 main.go:143] libmachine: Using SSH client type: native
	I1213 11:44:16.490914  567960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1213 11:44:16.490935  567960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:44:21.872109  567960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:44:21.872130  567960 machine.go:97] duration metric: took 6.268481319s to provisionDockerMachine
	I1213 11:44:21.872142  567960 start.go:293] postStartSetup for "pause-649359" (driver="docker")
	I1213 11:44:21.872152  567960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:44:21.872235  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:44:21.872310  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:21.891605  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.001001  567960 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:44:22.009231  567960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:44:22.009265  567960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:44:22.009279  567960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:44:22.009348  567960 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:44:22.009446  567960 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:44:22.009563  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:44:22.018693  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:44:22.037698  567960 start.go:296] duration metric: took 165.539838ms for postStartSetup
	I1213 11:44:22.037786  567960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:44:22.037844  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:22.055901  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.156785  567960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:44:22.161855  567960 fix.go:56] duration metric: took 6.589597447s for fixHost
	I1213 11:44:22.161881  567960 start.go:83] releasing machines lock for "pause-649359", held for 6.589646103s
	I1213 11:44:22.161959  567960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-649359
	I1213 11:44:22.178625  567960 ssh_runner.go:195] Run: cat /version.json
	I1213 11:44:22.178658  567960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:44:22.178676  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:22.178713  567960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-649359
	I1213 11:44:22.201640  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.204101  567960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/pause-649359/id_rsa Username:docker}
	I1213 11:44:22.303893  567960 ssh_runner.go:195] Run: systemctl --version
	I1213 11:44:22.397250  567960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:44:22.438251  567960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:44:22.442803  567960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:44:22.442893  567960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:44:22.450995  567960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:44:22.451019  567960 start.go:496] detecting cgroup driver to use...
	I1213 11:44:22.451049  567960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:44:22.451095  567960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:44:22.467266  567960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:44:22.480781  567960 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:44:22.480881  567960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:44:22.495921  567960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:44:22.509929  567960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:44:22.644567  567960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:44:22.781658  567960 docker.go:234] disabling docker service ...
	I1213 11:44:22.781802  567960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:44:22.796700  567960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:44:22.810762  567960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:44:22.971250  567960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:44:23.131932  567960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:44:23.144718  567960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:44:23.158651  567960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:44:23.158723  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.167816  567960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:44:23.167940  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.176293  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.184963  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.193556  567960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:44:23.201597  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.209940  567960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.218758  567960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:44:23.227368  567960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:44:23.234924  567960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:44:23.242088  567960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:44:23.370933  567960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:44:23.561204  567960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:44:23.561288  567960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:44:23.565075  567960 start.go:564] Will wait 60s for crictl version
	I1213 11:44:23.565157  567960 ssh_runner.go:195] Run: which crictl
	I1213 11:44:23.568621  567960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:44:23.596492  567960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:44:23.596594  567960 ssh_runner.go:195] Run: crio --version
	I1213 11:44:23.627483  567960 ssh_runner.go:195] Run: crio --version
	I1213 11:44:23.658992  567960 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:44:23.661841  567960 cli_runner.go:164] Run: docker network inspect pause-649359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:44:23.677986  567960 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:44:23.681707  567960 kubeadm.go:884] updating cluster {Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:44:23.681855  567960 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:44:23.681913  567960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:44:23.716376  567960 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:44:23.716402  567960 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:44:23.716456  567960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:44:23.741340  567960 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:44:23.741363  567960 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:44:23.741371  567960 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 11:44:23.741470  567960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-649359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:44:23.741550  567960 ssh_runner.go:195] Run: crio config
	I1213 11:44:23.805903  567960 cni.go:84] Creating CNI manager for ""
	I1213 11:44:23.805927  567960 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:44:23.805948  567960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:44:23.805970  567960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-649359 NodeName:pause-649359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:44:23.806100  567960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-649359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:44:23.806178  567960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:44:23.815367  567960 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:44:23.815444  567960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:44:23.824054  567960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1213 11:44:23.839983  567960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:44:23.852699  567960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1213 11:44:23.865638  567960 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:44:23.869807  567960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:44:23.997053  567960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:44:24.016509  567960 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359 for IP: 192.168.85.2
	I1213 11:44:24.016533  567960 certs.go:195] generating shared ca certs ...
	I1213 11:44:24.016550  567960 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:44:24.016769  567960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:44:24.016844  567960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:44:24.016859  567960 certs.go:257] generating profile certs ...
	I1213 11:44:24.016986  567960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.key
	I1213 11:44:24.017108  567960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/apiserver.key.afbfc0e7
	I1213 11:44:24.017192  567960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/proxy-client.key
	I1213 11:44:24.017339  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:44:24.017399  567960 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:44:24.017415  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:44:24.017460  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:44:24.017516  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:44:24.017555  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:44:24.017642  567960 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:44:24.018305  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:44:24.037932  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:44:24.057937  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:44:24.076620  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:44:24.097121  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 11:44:24.115401  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:44:24.134375  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:44:24.153524  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:44:24.175310  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:44:24.194239  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:44:24.212849  567960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:44:24.230874  567960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:44:24.244380  567960 ssh_runner.go:195] Run: openssl version
	I1213 11:44:24.250729  567960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.258574  567960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:44:24.266718  567960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.270585  567960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.270655  567960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:44:24.312210  567960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:44:24.320083  567960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.328244  567960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:44:24.336362  567960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.340370  567960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.340436  567960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:44:24.383264  567960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:44:24.391563  567960 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.399492  567960 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:44:24.407768  567960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.412072  567960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.412178  567960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:44:24.453922  567960 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:44:24.461701  567960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:44:24.465631  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:44:24.513241  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:44:24.595357  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:44:24.681867  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:44:24.792030  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:44:24.908306  567960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 11:44:24.980068  567960 kubeadm.go:401] StartCluster: {Name:pause-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:44:24.980251  567960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:44:24.980345  567960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:44:25.026336  567960 cri.go:89] found id: "1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe"
	I1213 11:44:25.026411  567960 cri.go:89] found id: "77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c"
	I1213 11:44:25.026446  567960 cri.go:89] found id: "44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373"
	I1213 11:44:25.026482  567960 cri.go:89] found id: "69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7"
	I1213 11:44:25.026501  567960 cri.go:89] found id: "84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8"
	I1213 11:44:25.026536  567960 cri.go:89] found id: "2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163"
	I1213 11:44:25.026559  567960 cri.go:89] found id: "e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	I1213 11:44:25.026580  567960 cri.go:89] found id: "2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	I1213 11:44:25.026613  567960 cri.go:89] found id: "b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1"
	I1213 11:44:25.026650  567960 cri.go:89] found id: "9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629"
	I1213 11:44:25.026669  567960 cri.go:89] found id: "6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42"
	I1213 11:44:25.026702  567960 cri.go:89] found id: "efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e"
	I1213 11:44:25.026726  567960 cri.go:89] found id: "e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0"
	I1213 11:44:25.026746  567960 cri.go:89] found id: ""
	I1213 11:44:25.026832  567960 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:44:25.044174  567960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:44:25Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:44:25.044247  567960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:44:25.068274  567960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:44:25.068295  567960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:44:25.068370  567960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:44:25.079797  567960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:44:25.080652  567960 kubeconfig.go:125] found "pause-649359" server: "https://192.168.85.2:8443"
	I1213 11:44:25.085071  567960 kapi.go:59] client config for pause-649359: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:44:25.094686  567960 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 11:44:25.094929  567960 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 11:44:25.094962  567960 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 11:44:25.094983  567960 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 11:44:25.095017  567960 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 11:44:25.096820  567960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:44:25.108471  567960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:44:25.108551  567960 kubeadm.go:602] duration metric: took 40.249258ms to restartPrimaryControlPlane
	I1213 11:44:25.108576  567960 kubeadm.go:403] duration metric: took 128.516358ms to StartCluster
	I1213 11:44:25.108627  567960 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:44:25.108725  567960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:44:25.109742  567960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:44:25.110048  567960 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:44:25.110446  567960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:44:25.110964  567960 config.go:182] Loaded profile config "pause-649359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:44:25.116451  567960 out.go:179] * Enabled addons: 
	I1213 11:44:25.116587  567960 out.go:179] * Verifying Kubernetes components...
	I1213 11:44:25.119421  567960 addons.go:530] duration metric: took 8.973209ms for enable addons: enabled=[]
	I1213 11:44:25.119479  567960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:44:25.415374  567960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:44:25.437526  567960 node_ready.go:35] waiting up to 6m0s for node "pause-649359" to be "Ready" ...
	I1213 11:44:28.437529  567960 node_ready.go:49] node "pause-649359" is "Ready"
	I1213 11:44:28.437610  567960 node_ready.go:38] duration metric: took 3.000047198s for node "pause-649359" to be "Ready" ...
	I1213 11:44:28.437640  567960 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:44:28.437736  567960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:44:28.449599  567960 api_server.go:72] duration metric: took 3.339481227s to wait for apiserver process to appear ...
	I1213 11:44:28.449681  567960 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:44:28.449734  567960 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:44:28.496200  567960 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:44:28.496285  567960 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:44:28.949856  567960 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:44:28.960439  567960 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 11:44:28.960519  567960 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 11:44:29.449864  567960 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:44:29.464235  567960 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 11:44:29.466391  567960 api_server.go:141] control plane version: v1.34.2
	I1213 11:44:29.466470  567960 api_server.go:131] duration metric: took 1.016751508s to wait for apiserver health ...
	I1213 11:44:29.466495  567960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:44:29.471392  567960 system_pods.go:59] 7 kube-system pods found
	I1213 11:44:29.471439  567960 system_pods.go:61] "coredns-66bc5c9577-g2449" [e851be5d-0744-4a63-8a57-05546a6999f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:44:29.471450  567960 system_pods.go:61] "etcd-pause-649359" [8bec0f3b-3cfd-49d4-a9b4-1b5899a24d2a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:44:29.471456  567960 system_pods.go:61] "kindnet-dlvx8" [ea100ffe-c03c-495d-bb02-d7340382cb8b] Running
	I1213 11:44:29.471464  567960 system_pods.go:61] "kube-apiserver-pause-649359" [b7810941-e870-480f-be70-6b27f530961c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:44:29.471471  567960 system_pods.go:61] "kube-controller-manager-pause-649359" [6bc3d848-1c2b-4fd9-b647-91ac5edd2968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:44:29.471480  567960 system_pods.go:61] "kube-proxy-4p5n9" [be1a6262-3cc1-43f5-8671-3e19f21ba33e] Running
	I1213 11:44:29.471486  567960 system_pods.go:61] "kube-scheduler-pause-649359" [47261308-7ddf-41ea-a700-278e13986378] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:44:29.471492  567960 system_pods.go:74] duration metric: took 4.977192ms to wait for pod list to return data ...
	I1213 11:44:29.471508  567960 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:44:29.473995  567960 default_sa.go:45] found service account: "default"
	I1213 11:44:29.474068  567960 default_sa.go:55] duration metric: took 2.527365ms for default service account to be created ...
	I1213 11:44:29.474108  567960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:44:29.478462  567960 system_pods.go:86] 7 kube-system pods found
	I1213 11:44:29.478499  567960 system_pods.go:89] "coredns-66bc5c9577-g2449" [e851be5d-0744-4a63-8a57-05546a6999f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:44:29.478510  567960 system_pods.go:89] "etcd-pause-649359" [8bec0f3b-3cfd-49d4-a9b4-1b5899a24d2a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:44:29.478516  567960 system_pods.go:89] "kindnet-dlvx8" [ea100ffe-c03c-495d-bb02-d7340382cb8b] Running
	I1213 11:44:29.478523  567960 system_pods.go:89] "kube-apiserver-pause-649359" [b7810941-e870-480f-be70-6b27f530961c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:44:29.478530  567960 system_pods.go:89] "kube-controller-manager-pause-649359" [6bc3d848-1c2b-4fd9-b647-91ac5edd2968] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:44:29.478538  567960 system_pods.go:89] "kube-proxy-4p5n9" [be1a6262-3cc1-43f5-8671-3e19f21ba33e] Running
	I1213 11:44:29.478544  567960 system_pods.go:89] "kube-scheduler-pause-649359" [47261308-7ddf-41ea-a700-278e13986378] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:44:29.478550  567960 system_pods.go:126] duration metric: took 4.418715ms to wait for k8s-apps to be running ...
	I1213 11:44:29.478589  567960 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:44:29.478658  567960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:44:29.498704  567960 system_svc.go:56] duration metric: took 20.106152ms WaitForService to wait for kubelet
	I1213 11:44:29.498736  567960 kubeadm.go:587] duration metric: took 4.388622719s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:44:29.498755  567960 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:44:29.504601  567960 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:44:29.504645  567960 node_conditions.go:123] node cpu capacity is 2
	I1213 11:44:29.504658  567960 node_conditions.go:105] duration metric: took 5.89746ms to run NodePressure ...
	I1213 11:44:29.504672  567960 start.go:242] waiting for startup goroutines ...
	I1213 11:44:29.504680  567960 start.go:247] waiting for cluster config update ...
	I1213 11:44:29.504701  567960 start.go:256] writing updated cluster config ...
	I1213 11:44:29.505053  567960 ssh_runner.go:195] Run: rm -f paused
	I1213 11:44:29.510956  567960 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:44:29.511748  567960 kapi.go:59] client config for pause-649359: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.crt", KeyFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/profiles/pause-649359/client.key", CAFile:"/home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 11:44:29.517163  567960 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g2449" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:44:31.522992  567960 pod_ready.go:104] pod "coredns-66bc5c9577-g2449" is not "Ready", error: <nil>
	W1213 11:44:33.527961  567960 pod_ready.go:104] pod "coredns-66bc5c9577-g2449" is not "Ready", error: <nil>
	W1213 11:44:36.023418  567960 pod_ready.go:104] pod "coredns-66bc5c9577-g2449" is not "Ready", error: <nil>
	I1213 11:44:37.023804  567960 pod_ready.go:94] pod "coredns-66bc5c9577-g2449" is "Ready"
	I1213 11:44:37.023837  567960 pod_ready.go:86] duration metric: took 7.506647215s for pod "coredns-66bc5c9577-g2449" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:37.027394  567960 pod_ready.go:83] waiting for pod "etcd-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:44:39.033572  567960 pod_ready.go:104] pod "etcd-pause-649359" is not "Ready", error: <nil>
	W1213 11:44:41.035919  567960 pod_ready.go:104] pod "etcd-pause-649359" is not "Ready", error: <nil>
	W1213 11:44:43.533057  567960 pod_ready.go:104] pod "etcd-pause-649359" is not "Ready", error: <nil>
	I1213 11:44:44.034694  567960 pod_ready.go:94] pod "etcd-pause-649359" is "Ready"
	I1213 11:44:44.034724  567960 pod_ready.go:86] duration metric: took 7.00730098s for pod "etcd-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.038051  567960 pod_ready.go:83] waiting for pod "kube-apiserver-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.044067  567960 pod_ready.go:94] pod "kube-apiserver-pause-649359" is "Ready"
	I1213 11:44:44.044099  567960 pod_ready.go:86] duration metric: took 6.019413ms for pod "kube-apiserver-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.047195  567960 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.053164  567960 pod_ready.go:94] pod "kube-controller-manager-pause-649359" is "Ready"
	I1213 11:44:44.053196  567960 pod_ready.go:86] duration metric: took 5.970962ms for pod "kube-controller-manager-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.056461  567960 pod_ready.go:83] waiting for pod "kube-proxy-4p5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.231436  567960 pod_ready.go:94] pod "kube-proxy-4p5n9" is "Ready"
	I1213 11:44:44.231467  567960 pod_ready.go:86] duration metric: took 174.976547ms for pod "kube-proxy-4p5n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.430513  567960 pod_ready.go:83] waiting for pod "kube-scheduler-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.831361  567960 pod_ready.go:94] pod "kube-scheduler-pause-649359" is "Ready"
	I1213 11:44:44.831388  567960 pod_ready.go:86] duration metric: took 400.84446ms for pod "kube-scheduler-pause-649359" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:44:44.831401  567960 pod_ready.go:40] duration metric: took 15.32039938s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:44:44.883862  567960 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 11:44:44.887266  567960 out.go:179] * Done! kubectl is now configured to use "pause-649359" cluster and "default" namespace by default
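
Note: the repeated "openssl x509 -noout -checkend 86400" runs near the top of this log verify that each control-plane certificate is still valid for at least the next 24 hours. A minimal Go sketch of an equivalent check follows; this is illustrative only (not minikube's code), and the file path is simply the one that appears in the log above.

// Illustrative sketch: Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; treat it as a placeholder.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the certificate will have expired 86400s from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h, notAfter:", cert.NotAfter)
}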
	
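The healthz probes in the log above (two 500 responses with failed poststarthooks, then a 200) show the apiserver being re-polled until it reports healthy. Below is a minimal, illustrative Go sketch of that kind of retry loop; it is not minikube's implementation, the certificate file names are placeholders, and the endpoint is the one from this report's environment.

// Illustrative sketch: poll an apiserver /healthz endpoint with client certs
// until it returns HTTP 200, roughly every 500ms as in the log above.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Placeholder file names; in this report they live under the
	// minikube profile directory for "pause-649359".
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool,
			},
		},
	}

	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}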
	
	==> CRI-O <==
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.891313689Z" level=info msg="Started container" PID=2405 containerID=77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c description=kube-system/coredns-66bc5c9577-g2449/coredns id=1ba4f64e-744f-4b5e-b580-b90fb81ac8f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1ba17b8c6fb5148f17d586be6c9c21c081de4667ddd07449123fe26bc1429e8
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.927782824Z" level=info msg="Created container 1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe: kube-system/etcd-pause-649359/etcd" id=b178e0cc-34cd-4fc3-8d07-ea691a14ef57 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.930680382Z" level=info msg="Starting container: 1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe" id=04573513-840d-4a1a-bc69-44f1ef6204c7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:44:24 pause-649359 crio[2134]: time="2025-12-13T11:44:24.935653094Z" level=info msg="Started container" PID=2411 containerID=1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe description=kube-system/etcd-pause-649359/etcd id=04573513-840d-4a1a-bc69-44f1ef6204c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=97f636fbd4353e095d1a7d30fdb5c7308acd123f1a4d1c556b44c89447b3fd34
	Dec 13 11:44:25 pause-649359 crio[2134]: time="2025-12-13T11:44:25.38303171Z" level=info msg="Created container a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e: kube-system/kube-proxy-4p5n9/kube-proxy" id=49aa4f2a-f3da-4def-a690-18d12ecbb4ca name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:44:25 pause-649359 crio[2134]: time="2025-12-13T11:44:25.383856568Z" level=info msg="Starting container: a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e" id=43480b78-8ada-496b-943f-d90c1086c26c name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:44:25 pause-649359 crio[2134]: time="2025-12-13T11:44:25.38636996Z" level=info msg="Started container" PID=2414 containerID=a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e description=kube-system/kube-proxy-4p5n9/kube-proxy id=43480b78-8ada-496b-943f-d90c1086c26c name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbd0d9b2bb111ca275da2cc8eac154610425f9e1660d17fccd65affdacbc87e9
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.228168106Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.231777685Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.231815306Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.231838502Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.235335062Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.235374948Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.235397315Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.238623162Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.238657649Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.238680607Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.2418123Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.241845604Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.241868431Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.244885449Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.244920427Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.244942376Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.248016067Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:44:35 pause-649359 crio[2134]: time="2025-12-13T11:44:35.248048272Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a8e21604e0277       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                     25 seconds ago       Running             kube-proxy                1                   bbd0d9b2bb111       kube-proxy-4p5n9                       kube-system
	1d4978198e9f7       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     25 seconds ago       Running             etcd                      1                   97f636fbd4353       etcd-pause-649359                      kube-system
	77ddb3b808ba3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     25 seconds ago       Running             coredns                   1                   e1ba17b8c6fb5       coredns-66bc5c9577-g2449               kube-system
	44ffe7a6e162c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     25 seconds ago       Running             kindnet-cni               1                   1f898fa729ff0       kindnet-dlvx8                          kube-system
	69ba67493b5e9       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                     25 seconds ago       Running             kube-controller-manager   1                   2fc9a3d4e2f7e       kube-controller-manager-pause-649359   kube-system
	84a10b3ea40e6       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                     25 seconds ago       Running             kube-apiserver            1                   40a8337351a1f       kube-apiserver-pause-649359            kube-system
	2000805fb9746       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                     25 seconds ago       Running             kube-scheduler            1                   71779f4f5c867       kube-scheduler-pause-649359            kube-system
	e1b0b540fa92c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                     37 seconds ago       Exited              coredns                   0                   e1ba17b8c6fb5       coredns-66bc5c9577-g2449               kube-system
	2a7de57d4a05a       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   48 seconds ago       Exited              kindnet-cni               0                   1f898fa729ff0       kindnet-dlvx8                          kube-system
	b42074f6e347e       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                     50 seconds ago       Exited              kube-proxy                0                   bbd0d9b2bb111       kube-proxy-4p5n9                       kube-system
	9a3d70eeb43c6       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                     About a minute ago   Exited              kube-apiserver            0                   40a8337351a1f       kube-apiserver-pause-649359            kube-system
	6c2fc3a72c623       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                     About a minute ago   Exited              kube-scheduler            0                   71779f4f5c867       kube-scheduler-pause-649359            kube-system
	efb2aed1c0f23       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                     About a minute ago   Exited              kube-controller-manager   0                   2fc9a3d4e2f7e       kube-controller-manager-pause-649359   kube-system
	e7c15552f9059       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                     About a minute ago   Exited              etcd                      0                   97f636fbd4353       etcd-pause-649359                      kube-system
	
	
	==> coredns [77ddb3b808ba3a3c66122b470193d17e950914cc192aeaa1ce537982978c176c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37184 - 58638 "HINFO IN 5910939681083719222.9078383055485944623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.06058765s
	
	
	==> coredns [e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49342 - 14843 "HINFO IN 1904354533378192754.2073838843292775205. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022532813s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-649359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-649359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=pause-649359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_43_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:43:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-649359
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:44:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:43:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:43:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:43:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:44:25 +0000   Sat, 13 Dec 2025 11:44:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-649359
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                97c343b6-2965-464e-b5a7-2664ffc95532
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-g2449                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     51s
	  kube-system                 etcd-pause-649359                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         56s
	  kube-system                 kindnet-dlvx8                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      51s
	  kube-system                 kube-apiserver-pause-649359             250m (12%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-pause-649359    200m (10%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-4p5n9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-pause-649359             100m (5%)     0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21s                kube-proxy       
	  Normal   Starting                 50s                kube-proxy       
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node pause-649359 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node pause-649359 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node pause-649359 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  56s                kubelet          Node pause-649359 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s                kubelet          Node pause-649359 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s                kubelet          Node pause-649359 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                node-controller  Node pause-649359 event: Registered Node pause-649359 in Controller
	  Normal   CIDRAssignmentFailed     52s                cidrAllocator    Node pause-649359 status is now: CIDRAssignmentFailed
	  Normal   NodeReady                38s                kubelet          Node pause-649359 status is now: NodeReady
	  Normal   RegisteredNode           20s                node-controller  Node pause-649359 event: Registered Node pause-649359 in Controller
	
	
	==> dmesg <==
	[  +3.372319] overlayfs: idmapped layers are currently not supported
	[ +34.539888] overlayfs: idmapped layers are currently not supported
	[Dec13 11:12] overlayfs: idmapped layers are currently not supported
	[Dec13 11:13] overlayfs: idmapped layers are currently not supported
	[  +3.803792] overlayfs: idmapped layers are currently not supported
	[Dec13 11:14] overlayfs: idmapped layers are currently not supported
	[ +27.964028] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1d4978198e9f7a6747ea2655f6b6de52519dd4ed62f82a58b2e2c4e1ef98bbbe] <==
	{"level":"warn","ts":"2025-12-13T11:44:26.608078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.641389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.692868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.714549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.760559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.793363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.835702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.888664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.903606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.935896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.963454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.981067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:26.992994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.013942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.064206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.074635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.092177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.102569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.125755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.140281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.163183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.190905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.213536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.235199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:44:27.307604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54000","server-name":"","error":"EOF"}
	
	
	==> etcd [e7c15552f9059fa83530a9bacc61ee1252c8b4a381d93663a6f446492911bbf0] <==
	{"level":"warn","ts":"2025-12-13T11:43:50.444908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.466506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.487555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.548074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.549000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.568774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:43:50.663404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40314","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T11:44:16.662303Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-13T11:44:16.662356Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-649359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-13T11:44:16.662469Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:44:16.933349Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-13T11:44:16.933434Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:44:16.933488Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-13T11:44:16.933564Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-13T11:44:16.933601Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933609Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933665Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933687Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:44:16.933695Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-13T11:44:16.933662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-13T11:44:16.933708Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:44:16.936981Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-13T11:44:16.937066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-13T11:44:16.937103Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:44:16.937113Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-649359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 11:44:50 up  3:27,  0 user,  load average: 2.19, 1.70, 1.83
	Linux pause-649359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78] <==
	I1213 11:44:02.129355       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:44:02.130618       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:44:02.130795       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:44:02.130835       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:44:02.130874       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:44:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:44:02.335087       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:44:02.419589       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:44:02.419721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:44:02.420671       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:44:02.620107       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:44:02.620205       1 metrics.go:72] Registering metrics
	I1213 11:44:02.620303       1 controller.go:711] "Syncing nftables rules"
	I1213 11:44:12.339596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:44:12.339653       1 main.go:301] handling current node
	
	
	==> kindnet [44ffe7a6e162c4fee0b1aa729d805e20f9e028840e4af6b9ad5c9a656939c373] <==
	I1213 11:44:25.024764       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:44:25.027628       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:44:25.028226       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:44:25.029924       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:44:25.029985       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:44:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:44:25.232494       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:44:25.232576       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:44:25.232611       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:44:25.232807       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:44:28.535042       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:44:28.535153       1 metrics.go:72] Registering metrics
	I1213 11:44:28.535257       1 controller.go:711] "Syncing nftables rules"
	I1213 11:44:35.227730       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:44:35.227887       1 main.go:301] handling current node
	I1213 11:44:45.227696       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:44:45.227812       1 main.go:301] handling current node
	
	
	==> kube-apiserver [84a10b3ea40e6601774b3e6db2ab95691ab784e51dfcf7098ff258d7a9b6e9c8] <==
	I1213 11:44:28.418690       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:44:28.447829       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 11:44:28.447871       1 policy_source.go:240] refreshing policies
	I1213 11:44:28.447910       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 11:44:28.447954       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:44:28.447982       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:44:28.447996       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:44:28.448112       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 11:44:28.465823       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 11:44:28.486314       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:44:28.498726       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:44:28.522610       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:44:28.522806       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:44:28.502643       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 11:44:28.523140       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:44:28.530748       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:44:28.531164       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:44:28.542934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1213 11:44:28.552614       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:44:28.998753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:44:29.409718       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:44:30.873800       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:44:31.072816       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:44:31.122831       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 11:44:31.174936       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9a3d70eeb43c6a48062b267c0c45c9e29ff5980c55cb4522b094c5996bcc7629] <==
	W1213 11:44:16.690104       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690564       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690642       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690698       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690752       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690804       1 logging.go:55] [core] [Channel #17 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690853       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690909       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.690958       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691027       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691082       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691134       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691186       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691422       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691507       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691585       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691649       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691708       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691761       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691814       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691870       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.691923       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.692120       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1213 11:44:16.692575       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [69ba67493b5e941069b4105a9966d8e72af04aa74f3451c67b59e1792d39f2a7] <==
	I1213 11:44:30.784622       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 11:44:30.788835       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 11:44:30.792110       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:44:30.794354       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:44:30.797541       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 11:44:30.797626       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 11:44:30.797696       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-649359"
	I1213 11:44:30.797743       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 11:44:30.800064       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 11:44:30.802286       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:44:30.803507       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:44:30.808938       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:44:30.817133       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 11:44:30.817151       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 11:44:30.817170       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 11:44:30.817187       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:44:30.820407       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:44:30.821464       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:44:30.825918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:44:30.828072       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:44:30.830352       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:44:30.836658       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:44:30.844861       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:44:30.844888       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:44:30.844895       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [efb2aed1c0f23cd8fd9bd689dbbbcc305a654f6d718d16aded0b188a0f85575e] <==
	I1213 11:43:58.635756       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 11:43:58.635786       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 11:43:58.635813       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:43:58.635873       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 11:43:58.636195       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 11:43:58.636255       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 11:43:58.636332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 11:43:58.640005       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:43:58.640083       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:43:58.640269       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:43:58.640546       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 11:43:58.640573       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 11:43:58.640623       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:43:58.642690       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 11:43:58.642805       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 11:43:58.642867       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 11:43:58.642896       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 11:43:58.642924       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 11:43:58.647851       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:43:58.656401       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:43:58.659003       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-649359" podCIDRs=["10.244.0.0/24"]
	E1213 11:43:58.673094       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"pause-649359\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-649359" podCIDRs=["10.244.1.0/24"]
	E1213 11:43:58.673154       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"pause-649359\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="pause-649359"
	E1213 11:43:58.673195       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'pause-649359': failed to patch node CIDR: Node \"pause-649359\" is invalid: [spec.podCIDRs: Invalid value: [\"10.244.1.0/24\",\"10.244.0.0/24\"]: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1213 11:44:13.664518       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a8e21604e02771850162284cea6434cc4c66c53872a261c1d6c46f67be07725e] <==
	I1213 11:44:25.778028       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:44:26.524609       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:44:28.611552       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:44:28.619563       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:44:28.624337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:44:28.733708       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:44:28.733769       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:44:28.761239       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:44:28.761681       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:44:28.761999       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:44:28.763474       1 config.go:200] "Starting service config controller"
	I1213 11:44:28.791035       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:44:28.769257       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:44:28.791250       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:44:28.769288       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:44:28.791349       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:44:28.790882       1 config.go:309] "Starting node config controller"
	I1213 11:44:28.791414       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:44:28.791561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:44:28.891463       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:44:28.891589       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:44:28.891605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b42074f6e347ec4d0c964e8198f85e6532b0014bb1e60750a212fd0a9a4a62a1] <==
	I1213 11:43:59.868699       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:43:59.969032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:44:00.084166       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:44:00.084237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:44:00.084375       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:44:00.374702       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:44:00.374780       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:44:00.450080       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:44:00.450486       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:44:00.450501       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:44:00.458151       1 config.go:200] "Starting service config controller"
	I1213 11:44:00.458184       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:44:00.458217       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:44:00.458222       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:44:00.458236       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:44:00.458240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:44:00.459001       1 config.go:309] "Starting node config controller"
	I1213 11:44:00.459021       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:44:00.459029       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:44:00.559982       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:44:00.560020       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:44:00.560065       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2000805fb9746ade340bf4b45bcf2b3c8530d52c36d96e3640838f94c9200163] <==
	I1213 11:44:28.085689       1 serving.go:386] Generated self-signed cert in-memory
	I1213 11:44:29.044078       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:44:29.044188       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:44:29.066735       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:44:29.066823       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 11:44:29.066852       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 11:44:29.066874       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:44:29.083396       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:29.083428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:29.083446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:44:29.083451       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:44:29.167299       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 11:44:29.183963       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:44:29.184090       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6c2fc3a72c623073db614eabbbfa9a65bc0404d50c8390c5b66c60b9c9862e42] <==
	E1213 11:43:52.718195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 11:43:52.718307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 11:43:52.718345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 11:43:52.718375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 11:43:52.721016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 11:43:52.721141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:43:52.721248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 11:43:52.721541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 11:43:52.721653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 11:43:52.721760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:43:52.721864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 11:43:52.721957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 11:43:52.722188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 11:43:52.722320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 11:43:52.722424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 11:43:52.723168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 11:43:52.724770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 11:43:52.724775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1213 11:43:54.211090       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:16.670556       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1213 11:44:16.670585       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1213 11:44:16.670608       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1213 11:44:16.670635       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:44:16.670972       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1213 11:44:16.670996       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.617887    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="30e3c7d20b09ce02b630882db0497c98" pod="kube-system/kube-scheduler-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618064    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="205e683250695d1a163b559880967ff2" pod="kube-system/etcd-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618219    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="40b13f97116b19b789e50b2d595988bc" pod="kube-system/kube-apiserver-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618368    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="119a20aa2837a3826d8f18f6cb4520f6" pod="kube-system/kube-controller-manager-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.618520    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5n9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be1a6262-3cc1-43f5-8671-3e19f21ba33e" pod="kube-system/kube-proxy-4p5n9"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: I1213 11:44:24.620947    1331 scope.go:117] "RemoveContainer" containerID="2a7de57d4a05a6dbf2502d22d4bbcaa154c070ec7964da22579d06bac5b9eb78"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.621688    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="119a20aa2837a3826d8f18f6cb4520f6" pod="kube-system/kube-controller-manager-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.621984    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-dlvx8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ea100ffe-c03c-495d-bb02-d7340382cb8b" pod="kube-system/kindnet-dlvx8"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.622255    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5n9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be1a6262-3cc1-43f5-8671-3e19f21ba33e" pod="kube-system/kube-proxy-4p5n9"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.622515    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="30e3c7d20b09ce02b630882db0497c98" pod="kube-system/kube-scheduler-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.622773    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="205e683250695d1a163b559880967ff2" pod="kube-system/etcd-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.623298    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="40b13f97116b19b789e50b2d595988bc" pod="kube-system/kube-apiserver-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: I1213 11:44:24.645130    1331 scope.go:117] "RemoveContainer" containerID="e1b0b540fa92caa18713aeeae9ab47a45eaf1e6ff2bb9db8d90bdbafb6ac6006"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.645556    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-g2449\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="e851be5d-0744-4a63-8a57-05546a6999f2" pod="kube-system/coredns-66bc5c9577-g2449"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.645850    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="30e3c7d20b09ce02b630882db0497c98" pod="kube-system/kube-scheduler-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.646125    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="205e683250695d1a163b559880967ff2" pod="kube-system/etcd-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.646455    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="40b13f97116b19b789e50b2d595988bc" pod="kube-system/kube-apiserver-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.646769    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-649359\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="119a20aa2837a3826d8f18f6cb4520f6" pod="kube-system/kube-controller-manager-pause-649359"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.647084    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-dlvx8\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="ea100ffe-c03c-495d-bb02-d7340382cb8b" pod="kube-system/kindnet-dlvx8"
	Dec 13 11:44:24 pause-649359 kubelet[1331]: E1213 11:44:24.647375    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5n9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="be1a6262-3cc1-43f5-8671-3e19f21ba33e" pod="kube-system/kube-proxy-4p5n9"
	Dec 13 11:44:34 pause-649359 kubelet[1331]: W1213 11:44:34.383620    1331 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 11:44:44 pause-649359 kubelet[1331]: W1213 11:44:44.398372    1331 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 13 11:44:45 pause-649359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:44:45 pause-649359 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:44:45 pause-649359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-649359 -n pause-649359
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-649359 -n pause-649359: exit status 2 (348.293861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-649359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.67s)
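Note: the post-mortem above ends with the harness re-checking the pause-649359 profile: the status command still reports the API server as Running while exiting with status 2, and systemd had already stopped kubelet.service at 11:44:45. Assuming the pause-649359 profile still exists on the host, the same two checks the harness ran can be repeated by hand:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-649359 -n pause-649359
	kubectl --context pause-649359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running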

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (292.351363ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:48:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-051699 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-051699 describe deploy/metrics-server -n kube-system: exit status 1 (88.250718ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-051699 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
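Note: the enable command fails before the metrics-server deployment is ever created. The error chain above (check paused: list paused: runc) shows the addon enable path checking for paused containers via "sudo runc list -f json", which exits with status 1 because /run/runc does not exist on this CRI-O node. A minimal way to reproduce that check by hand, assuming the old-k8s-version-051699 node is still running (the ssh invocations below are illustrative and not part of the harness output), is:

	out/minikube-linux-arm64 ssh -p old-k8s-version-051699 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p old-k8s-version-051699 -- ls -la /run/runc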
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-051699
helpers_test.go:244: (dbg) docker inspect old-k8s-version-051699:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b",
	        "Created": "2025-12-13T11:47:09.22535414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 583298,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:47:09.293567242Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/hosts",
	        "LogPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b-json.log",
	        "Name": "/old-k8s-version-051699",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-051699:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-051699",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b",
	                "LowerDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-051699",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-051699/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-051699",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-051699",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-051699",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0c7f3d4d8742ced0a2dd369b0066a5081da53ba5812fdf8f6f6bec1e0e641af",
	            "SandboxKey": "/var/run/docker/netns/a0c7f3d4d874",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-051699": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:9e:23:7f:34:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6116ab229e22ee69821bd47d3a0f489af279d0545fa10007411817efdd59740",
	                    "EndpointID": "e9b2fa2fcfa66ad0e2121063bfb44fcbde117007ca8c491e61f45dd26c647c32",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-051699",
	                        "5e184c16699d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-051699 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-051699 logs -n 25: (1.245300625s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-062409 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo containerd config dump                                                                                                                                                                                                  │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo crio config                                                                                                                                                                                                             │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ delete  │ -p cilium-062409                                                                                                                                                                                                                              │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:45 UTC │
	│ start   │ -p force-systemd-env-181508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-181508  │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p kubernetes-upgrade-854588                                                                                                                                                                                                                  │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420007    │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p force-systemd-env-181508                                                                                                                                                                                                                   │ force-systemd-env-181508  │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-options-522461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ ssh     │ cert-options-522461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:47 UTC │
	│ ssh     │ -p cert-options-522461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ delete  │ -p cert-options-522461                                                                                                                                                                                                                        │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:47:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:47:03.026386  582867 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:47:03.026510  582867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:47:03.026522  582867 out.go:374] Setting ErrFile to fd 2...
	I1213 11:47:03.026527  582867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:47:03.026787  582867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:47:03.027269  582867 out.go:368] Setting JSON to false
	I1213 11:47:03.028224  582867 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12575,"bootTime":1765613848,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:47:03.028296  582867 start.go:143] virtualization:  
	I1213 11:47:03.031788  582867 out.go:179] * [old-k8s-version-051699] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:47:03.035938  582867 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:47:03.036085  582867 notify.go:221] Checking for updates...
	I1213 11:47:03.042273  582867 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:47:03.045298  582867 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:47:03.048342  582867 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:47:03.051233  582867 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:47:03.054271  582867 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:47:03.057927  582867 config.go:182] Loaded profile config "cert-expiration-420007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:47:03.058046  582867 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:47:03.091095  582867 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:47:03.091241  582867 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:47:03.160127  582867 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:47:03.150309194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:47:03.160240  582867 docker.go:319] overlay module found
	I1213 11:47:03.163375  582867 out.go:179] * Using the docker driver based on user configuration
	I1213 11:47:03.166213  582867 start.go:309] selected driver: docker
	I1213 11:47:03.166232  582867 start.go:927] validating driver "docker" against <nil>
	I1213 11:47:03.166245  582867 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:47:03.167001  582867 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:47:03.225398  582867 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:47:03.216701563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:47:03.225565  582867 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:47:03.225792  582867 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:47:03.228904  582867 out.go:179] * Using Docker driver with root privileges
	I1213 11:47:03.231750  582867 cni.go:84] Creating CNI manager for ""
	I1213 11:47:03.231819  582867 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:47:03.231833  582867 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:47:03.231919  582867 start.go:353] cluster config:
	{Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:47:03.235099  582867 out.go:179] * Starting "old-k8s-version-051699" primary control-plane node in "old-k8s-version-051699" cluster
	I1213 11:47:03.238010  582867 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:47:03.240959  582867 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:47:03.243677  582867 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 11:47:03.243729  582867 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:47:03.243755  582867 cache.go:65] Caching tarball of preloaded images
	I1213 11:47:03.243767  582867 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:47:03.243841  582867 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:47:03.243852  582867 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1213 11:47:03.243959  582867 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/config.json ...
	I1213 11:47:03.243985  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/config.json: {Name:mka4d2b92a18edc18045656ad69ed8aa0d008889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:03.262673  582867 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:47:03.262697  582867 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:47:03.262717  582867 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:47:03.262781  582867 start.go:360] acquireMachinesLock for old-k8s-version-051699: {Name:mk7421d20807d926bcc4f5128055e7d390596771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:47:03.262924  582867 start.go:364] duration metric: took 112.903µs to acquireMachinesLock for "old-k8s-version-051699"
	I1213 11:47:03.262957  582867 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:47:03.263030  582867 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:47:03.266635  582867 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:47:03.266877  582867 start.go:159] libmachine.API.Create for "old-k8s-version-051699" (driver="docker")
	I1213 11:47:03.266920  582867 client.go:173] LocalClient.Create starting
	I1213 11:47:03.267008  582867 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:47:03.267051  582867 main.go:143] libmachine: Decoding PEM data...
	I1213 11:47:03.267070  582867 main.go:143] libmachine: Parsing certificate...
	I1213 11:47:03.267130  582867 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:47:03.267155  582867 main.go:143] libmachine: Decoding PEM data...
	I1213 11:47:03.267171  582867 main.go:143] libmachine: Parsing certificate...
	I1213 11:47:03.267565  582867 cli_runner.go:164] Run: docker network inspect old-k8s-version-051699 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:47:03.298464  582867 cli_runner.go:211] docker network inspect old-k8s-version-051699 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:47:03.298555  582867 network_create.go:284] running [docker network inspect old-k8s-version-051699] to gather additional debugging logs...
	I1213 11:47:03.298577  582867 cli_runner.go:164] Run: docker network inspect old-k8s-version-051699
	W1213 11:47:03.319423  582867 cli_runner.go:211] docker network inspect old-k8s-version-051699 returned with exit code 1
	I1213 11:47:03.319456  582867 network_create.go:287] error running [docker network inspect old-k8s-version-051699]: docker network inspect old-k8s-version-051699: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-051699 not found
	I1213 11:47:03.319487  582867 network_create.go:289] output of [docker network inspect old-k8s-version-051699]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-051699 not found
	
	** /stderr **
	I1213 11:47:03.319620  582867 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:47:03.342992  582867 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:47:03.343375  582867 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:47:03.343684  582867 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:47:03.343960  582867 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-006bee450fcf IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:e6:07:82:40:2c} reservation:<nil>}
	I1213 11:47:03.344386  582867 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a211c0}
	I1213 11:47:03.344405  582867 network_create.go:124] attempt to create docker network old-k8s-version-051699 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 11:47:03.344457  582867 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-051699 old-k8s-version-051699
	I1213 11:47:03.416081  582867 network_create.go:108] docker network old-k8s-version-051699 192.168.85.0/24 created
	I1213 11:47:03.416114  582867 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-051699" container
	I1213 11:47:03.416197  582867 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:47:03.433595  582867 cli_runner.go:164] Run: docker volume create old-k8s-version-051699 --label name.minikube.sigs.k8s.io=old-k8s-version-051699 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:47:03.453730  582867 oci.go:103] Successfully created a docker volume old-k8s-version-051699
	I1213 11:47:03.453820  582867 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-051699-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-051699 --entrypoint /usr/bin/test -v old-k8s-version-051699:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:47:04.138229  582867 oci.go:107] Successfully prepared a docker volume old-k8s-version-051699
	I1213 11:47:04.138347  582867 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 11:47:04.138363  582867 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:47:04.138450  582867 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-051699:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:47:09.151129  582867 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-051699:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (5.012639208s)
	I1213 11:47:09.151166  582867 kic.go:203] duration metric: took 5.012800079s to extract preloaded images to volume ...
	W1213 11:47:09.151310  582867 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:47:09.151437  582867 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:47:09.209616  582867 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-051699 --name old-k8s-version-051699 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-051699 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-051699 --network old-k8s-version-051699 --ip 192.168.85.2 --volume old-k8s-version-051699:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:47:09.528389  582867 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Running}}
	I1213 11:47:09.555801  582867 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:47:09.584760  582867 cli_runner.go:164] Run: docker exec old-k8s-version-051699 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:47:09.637515  582867 oci.go:144] the created container "old-k8s-version-051699" has a running status.
	I1213 11:47:09.637549  582867 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa...
	I1213 11:47:10.315759  582867 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:47:10.336929  582867 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:47:10.354147  582867 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:47:10.354181  582867 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-051699 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:47:10.396325  582867 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:47:10.415615  582867 machine.go:94] provisionDockerMachine start ...
	I1213 11:47:10.415728  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:10.432277  582867 main.go:143] libmachine: Using SSH client type: native
	I1213 11:47:10.432626  582867 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1213 11:47:10.432641  582867 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:47:10.433322  582867 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56684->127.0.0.1:33428: read: connection reset by peer
	I1213 11:47:13.587124  582867 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-051699
	
	I1213 11:47:13.587149  582867 ubuntu.go:182] provisioning hostname "old-k8s-version-051699"
	I1213 11:47:13.587227  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:13.605338  582867 main.go:143] libmachine: Using SSH client type: native
	I1213 11:47:13.605691  582867 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1213 11:47:13.605712  582867 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-051699 && echo "old-k8s-version-051699" | sudo tee /etc/hostname
	I1213 11:47:13.768495  582867 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-051699
	
	I1213 11:47:13.768569  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:13.786774  582867 main.go:143] libmachine: Using SSH client type: native
	I1213 11:47:13.787107  582867 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1213 11:47:13.787130  582867 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-051699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-051699/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-051699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:47:13.935807  582867 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:47:13.935847  582867 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:47:13.935872  582867 ubuntu.go:190] setting up certificates
	I1213 11:47:13.935881  582867 provision.go:84] configureAuth start
	I1213 11:47:13.935947  582867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:47:13.954720  582867 provision.go:143] copyHostCerts
	I1213 11:47:13.954798  582867 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:47:13.954814  582867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:47:13.954899  582867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:47:13.955006  582867 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:47:13.955020  582867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:47:13.955049  582867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:47:13.955144  582867 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:47:13.955153  582867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:47:13.955178  582867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:47:13.955229  582867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-051699 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-051699]
	I1213 11:47:14.200790  582867 provision.go:177] copyRemoteCerts
	I1213 11:47:14.200893  582867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:47:14.200953  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:14.225287  582867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:47:14.331410  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:47:14.349659  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 11:47:14.369047  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:47:14.388272  582867 provision.go:87] duration metric: took 452.372814ms to configureAuth
	I1213 11:47:14.388299  582867 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:47:14.388510  582867 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:47:14.388625  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:14.408118  582867 main.go:143] libmachine: Using SSH client type: native
	I1213 11:47:14.408468  582867 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1213 11:47:14.408492  582867 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:47:14.708314  582867 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:47:14.708341  582867 machine.go:97] duration metric: took 4.292699065s to provisionDockerMachine
	I1213 11:47:14.708353  582867 client.go:176] duration metric: took 11.441423274s to LocalClient.Create
	I1213 11:47:14.708368  582867 start.go:167] duration metric: took 11.441492049s to libmachine.API.Create "old-k8s-version-051699"
	I1213 11:47:14.708375  582867 start.go:293] postStartSetup for "old-k8s-version-051699" (driver="docker")
	I1213 11:47:14.708385  582867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:47:14.708451  582867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:47:14.708516  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:14.727199  582867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:47:14.831495  582867 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:47:14.834992  582867 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:47:14.835022  582867 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:47:14.835034  582867 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:47:14.835094  582867 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:47:14.835182  582867 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:47:14.835289  582867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:47:14.843142  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:47:14.861590  582867 start.go:296] duration metric: took 153.199755ms for postStartSetup
	I1213 11:47:14.861968  582867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:47:14.881747  582867 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/config.json ...
	I1213 11:47:14.882033  582867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:47:14.882083  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:14.899617  582867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:47:15.020286  582867 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:47:15.025859  582867 start.go:128] duration metric: took 11.762813018s to createHost
	I1213 11:47:15.025888  582867 start.go:83] releasing machines lock for "old-k8s-version-051699", held for 11.762951176s
	I1213 11:47:15.025979  582867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:47:15.044694  582867 ssh_runner.go:195] Run: cat /version.json
	I1213 11:47:15.044724  582867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:47:15.044749  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:15.044792  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:15.064671  582867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:47:15.085804  582867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:47:15.171415  582867 ssh_runner.go:195] Run: systemctl --version
	I1213 11:47:15.266368  582867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:47:15.306760  582867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:47:15.311189  582867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:47:15.311264  582867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:47:15.338732  582867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
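For reference, a minimal sketch of checking what that disabling step leaves on the node (paths are the defaults CRI-O scans; the renamed files are the ones listed in the log line above):
	sudo ls /etc/cni/net.d/
	# files matching *bridge* or *podman* get a ".mk_disabled" suffix,
	# e.g. 87-podman-bridge.conflist.mk_disabled, so the runtime no longer loads them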
	I1213 11:47:15.338763  582867 start.go:496] detecting cgroup driver to use...
	I1213 11:47:15.338795  582867 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:47:15.338856  582867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:47:15.355825  582867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:47:15.368371  582867 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:47:15.368473  582867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:47:15.386016  582867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:47:15.403958  582867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:47:15.515251  582867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:47:15.635703  582867 docker.go:234] disabling docker service ...
	I1213 11:47:15.635795  582867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:47:15.656717  582867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:47:15.670764  582867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:47:15.786936  582867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:47:15.903420  582867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:47:15.916596  582867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:47:15.930749  582867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 11:47:15.930838  582867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:47:15.940194  582867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:47:15.940288  582867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:47:15.949747  582867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:47:15.959317  582867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:47:15.968984  582867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:47:15.977026  582867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:47:15.986014  582867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:47:16.004291  582867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
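A quick way to confirm what those sed edits are expected to leave in the CRI-O drop-in (a sketch to run on the node; surrounding keys vary by CRI-O version):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",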
	I1213 11:47:16.016460  582867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:47:16.025585  582867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:47:16.033705  582867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:47:16.154379  582867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:47:16.335025  582867 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:47:16.335095  582867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:47:16.338846  582867 start.go:564] Will wait 60s for crictl version
	I1213 11:47:16.338956  582867 ssh_runner.go:195] Run: which crictl
	I1213 11:47:16.342295  582867 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:47:16.371968  582867 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:47:16.372053  582867 ssh_runner.go:195] Run: crio --version
	I1213 11:47:16.400322  582867 ssh_runner.go:195] Run: crio --version
	I1213 11:47:16.433609  582867 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1213 11:47:16.436506  582867 cli_runner.go:164] Run: docker network inspect old-k8s-version-051699 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:47:16.456057  582867 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:47:16.460010  582867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
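The net effect of that hosts edit is a single extra entry inside the node; a sketch of checking it:
	grep 'host.minikube.internal' /etc/hosts
	# 192.168.85.1	host.minikube.internal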
	I1213 11:47:16.470147  582867 kubeadm.go:884] updating cluster {Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:47:16.470263  582867 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 11:47:16.470321  582867 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:47:16.507829  582867 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:47:16.507857  582867 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:47:16.507912  582867 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:47:16.534237  582867 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:47:16.534261  582867 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:47:16.534269  582867 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1213 11:47:16.534352  582867 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-051699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
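To see how systemd resolves that kubelet unit together with the drop-in minikube writes (the drop-in path comes from the scp step just below), a sketch:
	sudo systemctl cat kubelet
	# shows /lib/systemd/system/kubelet.service plus
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf carrying the ExecStart flags above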
	I1213 11:47:16.534436  582867 ssh_runner.go:195] Run: crio config
	I1213 11:47:16.606448  582867 cni.go:84] Creating CNI manager for ""
	I1213 11:47:16.606473  582867 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:47:16.606489  582867 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:47:16.606541  582867 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-051699 NodeName:old-k8s-version-051699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:47:16.606743  582867 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-051699"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:47:16.606844  582867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1213 11:47:16.614670  582867 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:47:16.614766  582867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:47:16.622381  582867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1213 11:47:16.635897  582867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:47:16.650114  582867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
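If you want to sanity-check a generated config like the one written above without changing node state, a sketch (the dry run still executes preflight checks, so in this docker environment it may need the same ignore flags used later by minikube):
	sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	# on kubeadm v1.26+ the file can also be linted without running anything:
	# kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new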
	I1213 11:47:16.664020  582867 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:47:16.667687  582867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:47:16.677869  582867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:47:16.787746  582867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:47:16.805378  582867 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699 for IP: 192.168.85.2
	I1213 11:47:16.805444  582867 certs.go:195] generating shared ca certs ...
	I1213 11:47:16.805476  582867 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:16.805656  582867 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:47:16.805733  582867 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:47:16.805769  582867 certs.go:257] generating profile certs ...
	I1213 11:47:16.805848  582867 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.key
	I1213 11:47:16.805886  582867 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt with IP's: []
	I1213 11:47:17.163362  582867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt ...
	I1213 11:47:17.163394  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: {Name:mk3b667dbc57a6cc1bc130d7274679232bd26b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:17.163638  582867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.key ...
	I1213 11:47:17.163655  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.key: {Name:mk89629b4efb095fc4979276575c8787fc953730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:17.163757  582867 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key.8b85897d
	I1213 11:47:17.163777  582867 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt.8b85897d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:47:17.401941  582867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt.8b85897d ...
	I1213 11:47:17.401975  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt.8b85897d: {Name:mkcb82240e2cafc3b34c5996d72fc45fca9326bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:17.402162  582867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key.8b85897d ...
	I1213 11:47:17.402181  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key.8b85897d: {Name:mkc9ef95c4fff93d9ae0d0c1151a9968e22f4bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:17.402271  582867 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt.8b85897d -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt
	I1213 11:47:17.402355  582867 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key.8b85897d -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key
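A sketch of double-checking the SANs baked into that apiserver certificate on the Jenkins host (the IP entries should match the list logged above; DNS SANs are listed alongside them):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2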
	I1213 11:47:17.402421  582867 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key
	I1213 11:47:17.402440  582867 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.crt with IP's: []
	I1213 11:47:17.811806  582867 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.crt ...
	I1213 11:47:17.811839  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.crt: {Name:mk64de5b67feb1af92d63eacbfe59ae5f190ac90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:17.812013  582867 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key ...
	I1213 11:47:17.812032  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key: {Name:mke9465de527351b894512f7035af6151ed1be61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:17.812211  582867 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:47:17.812262  582867 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:47:17.812277  582867 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:47:17.812305  582867 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:47:17.812340  582867 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:47:17.812370  582867 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:47:17.812419  582867 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:47:17.812974  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:47:17.831138  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:47:17.850695  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:47:17.873461  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:47:17.892023  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 11:47:17.909987  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:47:17.927994  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:47:17.946031  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:47:17.963436  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:47:17.981365  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:47:17.999141  582867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:47:18.031797  582867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:47:18.046291  582867 ssh_runner.go:195] Run: openssl version
	I1213 11:47:18.052813  582867 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:47:18.060633  582867 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:47:18.068943  582867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:47:18.073199  582867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:47:18.073365  582867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:47:18.115561  582867 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:47:18.123213  582867 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:47:18.130800  582867 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:47:18.138302  582867 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:47:18.145728  582867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:47:18.149361  582867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:47:18.149450  582867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:47:18.190650  582867 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:47:18.198245  582867 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:47:18.206022  582867 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:47:18.214645  582867 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:47:18.222873  582867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:47:18.226517  582867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:47:18.226584  582867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:47:18.270332  582867 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:47:18.279030  582867 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
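Those <hash>.0 symlink names are simply the subject hash of each certificate; a sketch of reproducing one of them by hand:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, which is why the link created above is /etc/ssl/certs/b5213941.0
	ls -l /etc/ssl/certs/b5213941.0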
	I1213 11:47:18.288436  582867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:47:18.293162  582867 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:47:18.293256  582867 kubeadm.go:401] StartCluster: {Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:47:18.293399  582867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:47:18.293473  582867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:47:18.329981  582867 cri.go:89] found id: ""
	I1213 11:47:18.330096  582867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:47:18.340491  582867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:47:18.348782  582867 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:47:18.348905  582867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:47:18.359385  582867 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:47:18.359463  582867 kubeadm.go:158] found existing configuration files:
	
	I1213 11:47:18.359548  582867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:47:18.369758  582867 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:47:18.369869  582867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:47:18.377127  582867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:47:18.385025  582867 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:47:18.385119  582867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:47:18.392873  582867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:47:18.400657  582867 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:47:18.400757  582867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:47:18.408768  582867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:47:18.417103  582867 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:47:18.417165  582867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:47:18.424872  582867 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:47:18.472957  582867 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1213 11:47:18.473240  582867 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:47:18.510975  582867 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:47:18.511113  582867 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:47:18.511179  582867 kubeadm.go:319] OS: Linux
	I1213 11:47:18.511251  582867 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:47:18.511330  582867 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:47:18.511402  582867 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:47:18.511483  582867 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:47:18.511580  582867 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:47:18.511662  582867 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:47:18.511735  582867 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:47:18.511819  582867 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:47:18.511889  582867 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:47:18.592528  582867 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:47:18.592686  582867 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:47:18.592805  582867 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1213 11:47:18.754503  582867 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:47:18.758072  582867 out.go:252]   - Generating certificates and keys ...
	I1213 11:47:18.758220  582867 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:47:18.758323  582867 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:47:19.295397  582867 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:47:19.748742  582867 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:47:20.501578  582867 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:47:21.005599  582867 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:47:21.639490  582867 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:47:21.639828  582867 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-051699] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:47:22.510102  582867 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:47:22.510557  582867 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-051699] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:47:22.913137  582867 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:47:23.254045  582867 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:47:24.190212  582867 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:47:24.190500  582867 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:47:24.368989  582867 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:47:25.278966  582867 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:47:25.554818  582867 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:47:25.786089  582867 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:47:25.786723  582867 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:47:25.789381  582867 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:47:25.792715  582867 out.go:252]   - Booting up control plane ...
	I1213 11:47:25.792833  582867 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:47:25.792937  582867 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:47:25.793022  582867 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:47:25.816037  582867 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:47:25.816710  582867 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:47:25.816922  582867 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:47:25.960633  582867 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1213 11:47:33.463205  582867 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.502615 seconds
	I1213 11:47:33.463341  582867 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 11:47:33.479706  582867 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 11:47:34.007139  582867 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 11:47:34.007348  582867 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-051699 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 11:47:34.519833  582867 kubeadm.go:319] [bootstrap-token] Using token: ofw7rw.iqam4kpu7xka02jk
	I1213 11:47:34.522757  582867 out.go:252]   - Configuring RBAC rules ...
	I1213 11:47:34.522887  582867 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 11:47:34.527627  582867 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 11:47:34.537007  582867 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 11:47:34.544610  582867 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 11:47:34.548810  582867 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 11:47:34.556583  582867 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 11:47:34.572723  582867 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 11:47:34.849795  582867 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 11:47:34.933628  582867 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 11:47:34.935096  582867 kubeadm.go:319] 
	I1213 11:47:34.935171  582867 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 11:47:34.935176  582867 kubeadm.go:319] 
	I1213 11:47:34.935254  582867 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 11:47:34.935258  582867 kubeadm.go:319] 
	I1213 11:47:34.935284  582867 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 11:47:34.935368  582867 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 11:47:34.935419  582867 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 11:47:34.935423  582867 kubeadm.go:319] 
	I1213 11:47:34.935477  582867 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 11:47:34.935481  582867 kubeadm.go:319] 
	I1213 11:47:34.935561  582867 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 11:47:34.935567  582867 kubeadm.go:319] 
	I1213 11:47:34.935619  582867 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 11:47:34.935694  582867 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 11:47:34.935770  582867 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 11:47:34.935775  582867 kubeadm.go:319] 
	I1213 11:47:34.935860  582867 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 11:47:34.935936  582867 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 11:47:34.935946  582867 kubeadm.go:319] 
	I1213 11:47:34.936031  582867 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ofw7rw.iqam4kpu7xka02jk \
	I1213 11:47:34.936135  582867 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 \
	I1213 11:47:34.936179  582867 kubeadm.go:319] 	--control-plane 
	I1213 11:47:34.936183  582867 kubeadm.go:319] 
	I1213 11:47:34.936268  582867 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 11:47:34.936272  582867 kubeadm.go:319] 
	I1213 11:47:34.936354  582867 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ofw7rw.iqam4kpu7xka02jk \
	I1213 11:47:34.936456  582867 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 
	I1213 11:47:34.941393  582867 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:47:34.941514  582867 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
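For completeness, the --discovery-token-ca-cert-hash printed in the join command above can be recomputed from the cluster CA on the node; a sketch using the command documented by kubeadm (it assumes an RSA CA key, which is what minikube generates):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169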
	I1213 11:47:34.941530  582867 cni.go:84] Creating CNI manager for ""
	I1213 11:47:34.941538  582867 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:47:34.944988  582867 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 11:47:34.947811  582867 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 11:47:34.957784  582867 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1213 11:47:34.957804  582867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1213 11:47:34.997891  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 11:47:35.976699  582867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 11:47:35.976759  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:35.976838  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-051699 minikube.k8s.io/updated_at=2025_12_13T11_47_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=old-k8s-version-051699 minikube.k8s.io/primary=true
	I1213 11:47:36.128606  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:36.128675  582867 ops.go:34] apiserver oom_adj: -16
	I1213 11:47:36.628712  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:37.129589  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:37.629611  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:38.128960  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:38.628699  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:39.128686  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:39.628707  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:40.129382  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:40.629657  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:41.128845  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:41.629335  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:42.128715  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:42.629594  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:43.128861  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:43.629453  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:44.128764  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:44.629443  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:45.128966  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:45.628722  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:46.129215  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:46.628747  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:47.128866  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:47.629479  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:48.129336  582867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:47:48.265466  582867 kubeadm.go:1114] duration metric: took 12.288765864s to wait for elevateKubeSystemPrivileges
	I1213 11:47:48.265494  582867 kubeadm.go:403] duration metric: took 29.972242004s to StartCluster
	I1213 11:47:48.265513  582867 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:48.265579  582867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:47:48.266591  582867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:47:48.266835  582867 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:47:48.266970  582867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 11:47:48.267232  582867 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:47:48.267268  582867 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:47:48.267325  582867 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-051699"
	I1213 11:47:48.267339  582867 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-051699"
	I1213 11:47:48.267359  582867 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:47:48.267865  582867 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-051699"
	I1213 11:47:48.267883  582867 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-051699"
	I1213 11:47:48.268259  582867 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:47:48.268595  582867 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:47:48.272450  582867 out.go:179] * Verifying Kubernetes components...
	I1213 11:47:48.279664  582867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:47:48.311427  582867 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:47:48.314823  582867 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:47:48.314855  582867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:47:48.314922  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:48.318825  582867 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-051699"
	I1213 11:47:48.318865  582867 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:47:48.321565  582867 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:47:48.348695  582867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:47:48.367936  582867 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:47:48.367963  582867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:47:48.368026  582867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:47:48.394686  582867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:47:48.537330  582867 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 11:47:48.547328  582867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:47:48.567125  582867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:47:48.679917  582867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:47:49.271446  582867 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
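That injected record ends up as a hosts plugin block in the coredns ConfigMap; a sketch of inspecting it with a kubectl pointed at this cluster:
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# look for:
	#   hosts {
	#      192.168.85.1 host.minikube.internal
	#      fallthrough
	#   }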
	I1213 11:47:49.656066  582867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.108700725s)
	I1213 11:47:49.656118  582867 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.088970074s)
	I1213 11:47:49.657016  582867 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-051699" to be "Ready" ...
	I1213 11:47:49.680325  582867 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 11:47:49.683327  582867 addons.go:530] duration metric: took 1.416034335s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 11:47:49.775420  582867 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-051699" context rescaled to 1 replicas
	W1213 11:47:51.660991  582867 node_ready.go:57] node "old-k8s-version-051699" has "Ready":"False" status (will retry)
	W1213 11:47:54.160123  582867 node_ready.go:57] node "old-k8s-version-051699" has "Ready":"False" status (will retry)
	W1213 11:47:56.160965  582867 node_ready.go:57] node "old-k8s-version-051699" has "Ready":"False" status (will retry)
	W1213 11:47:58.661659  582867 node_ready.go:57] node "old-k8s-version-051699" has "Ready":"False" status (will retry)
	W1213 11:48:01.160825  582867 node_ready.go:57] node "old-k8s-version-051699" has "Ready":"False" status (will retry)
	I1213 11:48:02.661336  582867 node_ready.go:49] node "old-k8s-version-051699" is "Ready"
	I1213 11:48:02.661371  582867 node_ready.go:38] duration metric: took 13.004325213s for node "old-k8s-version-051699" to be "Ready" ...
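The same readiness gate, expressed as a one-liner against the cluster (a sketch; the timeout mirrors the 6m0s wait logged above and assumes a kubectl context for this profile):
	kubectl wait --for=condition=Ready node/old-k8s-version-051699 --timeout=6m0s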
	I1213 11:48:02.661386  582867 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:48:02.661458  582867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:48:02.699601  582867 api_server.go:72] duration metric: took 14.432735717s to wait for apiserver process to appear ...
	I1213 11:48:02.699630  582867 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:48:02.699656  582867 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:48:02.709934  582867 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 11:48:02.711389  582867 api_server.go:141] control plane version: v1.28.0
	I1213 11:48:02.711422  582867 api_server.go:131] duration metric: took 11.784269ms to wait for apiserver health ...
	I1213 11:48:02.711432  582867 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:48:02.718073  582867 system_pods.go:59] 8 kube-system pods found
	I1213 11:48:02.718113  582867 system_pods.go:61] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:02.718122  582867 system_pods.go:61] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running
	I1213 11:48:02.718127  582867 system_pods.go:61] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:02.718136  582867 system_pods.go:61] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running
	I1213 11:48:02.718142  582867 system_pods.go:61] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running
	I1213 11:48:02.718146  582867 system_pods.go:61] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:02.718150  582867 system_pods.go:61] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running
	I1213 11:48:02.718156  582867 system_pods.go:61] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:48:02.718163  582867 system_pods.go:74] duration metric: took 6.724451ms to wait for pod list to return data ...
	I1213 11:48:02.718177  582867 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:48:02.722197  582867 default_sa.go:45] found service account: "default"
	I1213 11:48:02.722232  582867 default_sa.go:55] duration metric: took 4.048749ms for default service account to be created ...
	I1213 11:48:02.722243  582867 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:48:02.726223  582867 system_pods.go:86] 8 kube-system pods found
	I1213 11:48:02.726259  582867 system_pods.go:89] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:02.726266  582867 system_pods.go:89] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running
	I1213 11:48:02.726274  582867 system_pods.go:89] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:02.726278  582867 system_pods.go:89] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running
	I1213 11:48:02.726284  582867 system_pods.go:89] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running
	I1213 11:48:02.726288  582867 system_pods.go:89] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:02.726293  582867 system_pods.go:89] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running
	I1213 11:48:02.726300  582867 system_pods.go:89] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:48:02.726332  582867 retry.go:31] will retry after 192.313349ms: missing components: kube-dns
	I1213 11:48:02.924185  582867 system_pods.go:86] 8 kube-system pods found
	I1213 11:48:02.924223  582867 system_pods.go:89] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:02.924231  582867 system_pods.go:89] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running
	I1213 11:48:02.924237  582867 system_pods.go:89] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:02.924241  582867 system_pods.go:89] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running
	I1213 11:48:02.924246  582867 system_pods.go:89] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running
	I1213 11:48:02.924250  582867 system_pods.go:89] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:02.924254  582867 system_pods.go:89] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running
	I1213 11:48:02.924261  582867 system_pods.go:89] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:48:02.924281  582867 retry.go:31] will retry after 276.144035ms: missing components: kube-dns
	I1213 11:48:03.204435  582867 system_pods.go:86] 8 kube-system pods found
	I1213 11:48:03.204470  582867 system_pods.go:89] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:03.204481  582867 system_pods.go:89] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running
	I1213 11:48:03.204518  582867 system_pods.go:89] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:03.204529  582867 system_pods.go:89] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running
	I1213 11:48:03.204535  582867 system_pods.go:89] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running
	I1213 11:48:03.204539  582867 system_pods.go:89] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:03.204544  582867 system_pods.go:89] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running
	I1213 11:48:03.204550  582867 system_pods.go:89] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:48:03.204568  582867 retry.go:31] will retry after 413.594516ms: missing components: kube-dns
	I1213 11:48:03.623128  582867 system_pods.go:86] 8 kube-system pods found
	I1213 11:48:03.623157  582867 system_pods.go:89] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Running
	I1213 11:48:03.623167  582867 system_pods.go:89] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running
	I1213 11:48:03.623172  582867 system_pods.go:89] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:03.623177  582867 system_pods.go:89] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running
	I1213 11:48:03.623183  582867 system_pods.go:89] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running
	I1213 11:48:03.623186  582867 system_pods.go:89] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:03.623191  582867 system_pods.go:89] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running
	I1213 11:48:03.623195  582867 system_pods.go:89] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Running
	I1213 11:48:03.623203  582867 system_pods.go:126] duration metric: took 900.954657ms to wait for k8s-apps to be running ...
	I1213 11:48:03.623215  582867 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:48:03.623271  582867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:48:03.648992  582867 system_svc.go:56] duration metric: took 25.766746ms WaitForService to wait for kubelet
	I1213 11:48:03.649074  582867 kubeadm.go:587] duration metric: took 15.382214886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:48:03.649127  582867 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:48:03.652669  582867 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:48:03.652705  582867 node_conditions.go:123] node cpu capacity is 2
	I1213 11:48:03.652720  582867 node_conditions.go:105] duration metric: took 3.580513ms to run NodePressure ...
	I1213 11:48:03.652756  582867 start.go:242] waiting for startup goroutines ...
	I1213 11:48:03.652763  582867 start.go:247] waiting for cluster config update ...
	I1213 11:48:03.652775  582867 start.go:256] writing updated cluster config ...
	I1213 11:48:03.653072  582867 ssh_runner.go:195] Run: rm -f paused
	I1213 11:48:03.658158  582867 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:48:03.662747  582867 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-w2hls" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:03.668476  582867 pod_ready.go:94] pod "coredns-5dd5756b68-w2hls" is "Ready"
	I1213 11:48:03.668510  582867 pod_ready.go:86] duration metric: took 5.736498ms for pod "coredns-5dd5756b68-w2hls" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:03.671774  582867 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:03.677122  582867 pod_ready.go:94] pod "etcd-old-k8s-version-051699" is "Ready"
	I1213 11:48:03.677163  582867 pod_ready.go:86] duration metric: took 5.349476ms for pod "etcd-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:03.681399  582867 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:03.686997  582867 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-051699" is "Ready"
	I1213 11:48:03.687022  582867 pod_ready.go:86] duration metric: took 5.599267ms for pod "kube-apiserver-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:03.690849  582867 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:04.062346  582867 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-051699" is "Ready"
	I1213 11:48:04.062378  582867 pod_ready.go:86] duration metric: took 371.496858ms for pod "kube-controller-manager-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:04.263369  582867 pod_ready.go:83] waiting for pod "kube-proxy-qmcm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:04.662838  582867 pod_ready.go:94] pod "kube-proxy-qmcm4" is "Ready"
	I1213 11:48:04.662866  582867 pod_ready.go:86] duration metric: took 399.470358ms for pod "kube-proxy-qmcm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:04.863485  582867 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:05.262329  582867 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-051699" is "Ready"
	I1213 11:48:05.262350  582867 pod_ready.go:86] duration metric: took 398.813049ms for pod "kube-scheduler-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:48:05.262363  582867 pod_ready.go:40] duration metric: took 1.604172974s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:48:05.329879  582867 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1213 11:48:05.333127  582867 out.go:203] 
	W1213 11:48:05.336347  582867 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 11:48:05.339273  582867 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 11:48:05.343111  582867 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-051699" cluster and "default" namespace by default
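
	As the warning above notes, the host kubectl (v1.33.2) is five minor versions ahead of the v1.28.0 control plane. A minimal way to sidestep the skew is the bundled kubectl that the log itself suggests; the -p profile flag below is an assumption added here to match the cluster name used in this run:

		minikube -p old-k8s-version-051699 kubectl -- get pods -A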
	
	
	==> CRI-O <==
	Dec 13 11:48:02 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:02.802697277Z" level=info msg="Created container 2bd788cd4722dc0db93e9fb24eab1411f6a404db149b5dc9d74a099053212f2e: kube-system/coredns-5dd5756b68-w2hls/coredns" id=60f3b54a-0c5a-4022-9086-7a1d930dc68d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:48:02 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:02.803601021Z" level=info msg="Starting container: 2bd788cd4722dc0db93e9fb24eab1411f6a404db149b5dc9d74a099053212f2e" id=8bd5b0cb-afd5-45b7-966e-281ef1cbe5a2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:48:02 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:02.805167136Z" level=info msg="Started container" PID=1957 containerID=2bd788cd4722dc0db93e9fb24eab1411f6a404db149b5dc9d74a099053212f2e description=kube-system/coredns-5dd5756b68-w2hls/coredns id=8bd5b0cb-afd5-45b7-966e-281ef1cbe5a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=42b054a86132fd2d5d8b6c6815f329c93fdc38f886f4a45a7dc99a9b89123e3c
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.855159381Z" level=info msg="Running pod sandbox: default/busybox/POD" id=67d47fb4-c36c-41e8-a5f2-1254f3a8c02c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.85524309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.861436313Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4e95cc6ec22b554374f1682ed9115ba875032dce2e17a3a462ba2731a058beda UID:c750a8f1-85bf-45da-b73b-d38717856602 NetNS:/var/run/netns/89576a4a-88ab-489f-b4ed-161c06740d22 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001416d70}] Aliases:map[]}"
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.861513926Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.8748276Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4e95cc6ec22b554374f1682ed9115ba875032dce2e17a3a462ba2731a058beda UID:c750a8f1-85bf-45da-b73b-d38717856602 NetNS:/var/run/netns/89576a4a-88ab-489f-b4ed-161c06740d22 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001416d70}] Aliases:map[]}"
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.875117957Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.88104484Z" level=info msg="Ran pod sandbox 4e95cc6ec22b554374f1682ed9115ba875032dce2e17a3a462ba2731a058beda with infra container: default/busybox/POD" id=67d47fb4-c36c-41e8-a5f2-1254f3a8c02c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.882990921Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=860e3226-058d-4c36-99cc-eab7ff480ea7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.883130236Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=860e3226-058d-4c36-99cc-eab7ff480ea7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.883171673Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=860e3226-058d-4c36-99cc-eab7ff480ea7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.883916088Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ba66afb7-fa4d-431e-84fd-3fcfc5a22120 name=/runtime.v1.ImageService/PullImage
	Dec 13 11:48:05 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:05.886839332Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.956176269Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=ba66afb7-fa4d-431e-84fd-3fcfc5a22120 name=/runtime.v1.ImageService/PullImage
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.959327018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4bd9b537-89e3-4415-b4e3-4dab48ab05f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.961965469Z" level=info msg="Creating container: default/busybox/busybox" id=a87d0737-5c9a-490d-9fb4-493eaab8ac56 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.962215883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.967148733Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.96782178Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.985233581Z" level=info msg="Created container 29f127ac9a587cebd3b282a930d3686e0166e9007dcd09f8a2229b56194a0f9b: default/busybox/busybox" id=a87d0737-5c9a-490d-9fb4-493eaab8ac56 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.986450205Z" level=info msg="Starting container: 29f127ac9a587cebd3b282a930d3686e0166e9007dcd09f8a2229b56194a0f9b" id=d5e1f8c5-0bc2-435b-ab80-6b24a23c5b1e name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:48:07 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:07.988673884Z" level=info msg="Started container" PID=2014 containerID=29f127ac9a587cebd3b282a930d3686e0166e9007dcd09f8a2229b56194a0f9b description=default/busybox/busybox id=d5e1f8c5-0bc2-435b-ab80-6b24a23c5b1e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e95cc6ec22b554374f1682ed9115ba875032dce2e17a3a462ba2731a058beda
	Dec 13 11:48:14 old-k8s-version-051699 crio[835]: time="2025-12-13T11:48:14.737234684Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	29f127ac9a587       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   4e95cc6ec22b5       busybox                                          default
	2bd788cd4722d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   42b054a86132f       coredns-5dd5756b68-w2hls                         kube-system
	eefce04ef2c9d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   85dc0c17c8725       storage-provisioner                              kube-system
	7a0508258cd7e       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   3befb0316b905       kindnet-n4ht9                                    kube-system
	48970f470fbe5       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   935b4ff493067       kube-proxy-qmcm4                                 kube-system
	554fc555cfbd3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   bfaf100a56591       kube-apiserver-old-k8s-version-051699            kube-system
	d9ed0ab48c814       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   a160aa32d1074       etcd-old-k8s-version-051699                      kube-system
	5f52e207cdfe3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   e7f52e3311410       kube-scheduler-old-k8s-version-051699            kube-system
	2eac62c5940e1       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   0c6cb0035b32e       kube-controller-manager-old-k8s-version-051699   kube-system
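
	The table above is the CRI view of the node's containers at the moment the logs were collected. A sketch of how to reproduce it against this profile, assuming minikube's ssh command passthrough form:

		minikube -p old-k8s-version-051699 ssh -- sudo crictl ps -a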
	
	
	==> coredns [2bd788cd4722dc0db93e9fb24eab1411f6a404db149b5dc9d74a099053212f2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60882 - 17945 "HINFO IN 2184470421372324090.5054057235733689529. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017478198s
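
	The Corefile loaded here is the one minikube rewrote earlier in this run to add the host.minikube.internal record (see the "host record injected into CoreDNS's ConfigMap" line above). A sketch of how to inspect it, assuming kubectl is pointed at this cluster:

		kubectl -n kube-system get configmap coredns -o yaml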
	
	
	==> describe nodes <==
	Name:               old-k8s-version-051699
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-051699
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=old-k8s-version-051699
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_47_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:47:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-051699
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:48:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:48:05 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:48:05 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:48:05 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:48:05 +0000   Sat, 13 Dec 2025 11:48:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-051699
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                1c639966-deba-4cb5-95e6-2e08822bad87
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-w2hls                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-051699                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-n4ht9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-051699             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-051699    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-qmcm4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-051699             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-051699 event: Registered Node old-k8s-version-051699 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-051699 status is now: NodeReady
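
	The node description above is a point-in-time capture; while the cluster is still running it can be regenerated with, for example:

		kubectl describe node old-k8s-version-051699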
	
	
	==> dmesg <==
	[Dec13 11:14] overlayfs: idmapped layers are currently not supported
	[ +27.964028] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d9ed0ab48c8140d3e73137008e9df9e014b188831e525dc47120d4a4a447a442] <==
	{"level":"info","ts":"2025-12-13T11:47:27.442551Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:47:27.442586Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:47:27.442594Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:47:27.443061Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:47:27.443087Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:47:27.455789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-13T11:47:27.455937Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-13T11:47:27.991091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-13T11:47:27.991216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-13T11:47:27.991265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-13T11:47:27.991324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-13T11:47:27.991356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-13T11:47:27.991406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-13T11:47:27.991438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-13T11:47:27.995317Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-051699 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T11:47:27.995541Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T11:47:27.995652Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T11:47:27.996231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T11:47:27.996284Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T11:47:27.997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-13T11:47:27.997115Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T11:47:27.999986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-13T11:47:28.026524Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T11:47:28.029914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T11:47:28.064715Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:48:16 up  3:30,  0 user,  load average: 2.44, 2.40, 2.10
	Linux old-k8s-version-051699 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7a0508258cd7ee6b7ab4c9001658bb1a59da8a7d304c17772266179c2dbe4ffa] <==
	I1213 11:47:51.727590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:47:51.727830       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:47:51.727965       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:47:51.727985       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:47:51.727995       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:47:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:47:52.021013       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:47:52.021050       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:47:52.021061       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:47:52.021879       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:47:52.321239       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:47:52.321344       1 metrics.go:72] Registering metrics
	I1213 11:47:52.321431       1 controller.go:711] "Syncing nftables rules"
	I1213 11:48:02.023607       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:48:02.023672       1 main.go:301] handling current node
	I1213 11:48:12.022766       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:48:12.022810       1 main.go:301] handling current node
	
	
	==> kube-apiserver [554fc555cfbd30a56d1b1dc593dfab1e0bd7591c5665c5f2dfa865432f1853c3] <==
	I1213 11:47:31.797860       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:47:31.799199       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1213 11:47:31.799226       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1213 11:47:31.800900       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 11:47:31.800946       1 aggregator.go:166] initial CRD sync complete...
	I1213 11:47:31.800962       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 11:47:31.800970       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:47:31.800977       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:47:31.804540       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:47:31.807483       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 11:47:32.604339       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 11:47:32.614029       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 11:47:32.614116       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:47:33.214138       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:47:33.266763       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:47:33.383604       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 11:47:33.403337       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1213 11:47:33.404470       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 11:47:33.409115       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:47:33.761042       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 11:47:34.831760       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 11:47:34.848144       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 11:47:34.860973       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1213 11:47:47.554011       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1213 11:47:47.811946       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2eac62c5940e1162da8e5994122078bfcf684a6728f63cf5ee00010da9c0db9a] <==
	I1213 11:47:47.298166       1 shared_informer.go:318] Caches are synced for persistent volume
	I1213 11:47:47.311828       1 shared_informer.go:318] Caches are synced for namespace
	I1213 11:47:47.339285       1 shared_informer.go:318] Caches are synced for service account
	I1213 11:47:47.567080       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qmcm4"
	I1213 11:47:47.574029       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-n4ht9"
	I1213 11:47:47.680857       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 11:47:47.680893       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 11:47:47.707343       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 11:47:47.817540       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1213 11:47:48.160851       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rqpst"
	I1213 11:47:48.180551       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-w2hls"
	I1213 11:47:48.207645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="390.175525ms"
	I1213 11:47:48.230273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.352018ms"
	I1213 11:47:48.230649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="215.28µs"
	I1213 11:47:49.345177       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1213 11:47:49.386017       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rqpst"
	I1213 11:47:49.407376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.09341ms"
	I1213 11:47:49.429252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.826222ms"
	I1213 11:47:49.468288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.987461ms"
	I1213 11:47:49.468386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.429µs"
	I1213 11:48:02.405877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.814µs"
	I1213 11:48:02.425825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="345.266µs"
	I1213 11:48:03.360006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.336415ms"
	I1213 11:48:03.361095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.897µs"
	I1213 11:48:07.170388       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [48970f470fbe5953d8bfa3c677e38a3689f6d40d015c9bbb98de3d108c8b8946] <==
	I1213 11:47:49.268073       1 server_others.go:69] "Using iptables proxy"
	I1213 11:47:49.356121       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1213 11:47:49.482483       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:47:49.484295       1 server_others.go:152] "Using iptables Proxier"
	I1213 11:47:49.484331       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 11:47:49.484338       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 11:47:49.484364       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 11:47:49.484549       1 server.go:846] "Version info" version="v1.28.0"
	I1213 11:47:49.484558       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:47:49.485392       1 config.go:188] "Starting service config controller"
	I1213 11:47:49.485412       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 11:47:49.485432       1 config.go:97] "Starting endpoint slice config controller"
	I1213 11:47:49.485435       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 11:47:49.485858       1 config.go:315] "Starting node config controller"
	I1213 11:47:49.485864       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 11:47:49.585896       1 shared_informer.go:318] Caches are synced for node config
	I1213 11:47:49.585929       1 shared_informer.go:318] Caches are synced for service config
	I1213 11:47:49.585964       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5f52e207cdfe3d3cf48ecaee6b0da2a7a4c19e1836109d351b80567df8b88931] <==
	W1213 11:47:31.782572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 11:47:31.783508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1213 11:47:31.782607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 11:47:31.783595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 11:47:31.782648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 11:47:31.783689       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1213 11:47:31.782690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 11:47:31.783764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1213 11:47:32.673719       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 11:47:32.673833       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1213 11:47:32.735869       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 11:47:32.736013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1213 11:47:32.755477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 11:47:32.755604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 11:47:32.825296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 11:47:32.825401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 11:47:32.858428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 11:47:32.858661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1213 11:47:32.858428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 11:47:32.858792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1213 11:47:32.883112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 11:47:32.883373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1213 11:47:33.140256       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 11:47:33.140362       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1213 11:47:35.861793       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 11:47:47 old-k8s-version-051699 kubelet[1387]: I1213 11:47:47.681615    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ab5345a-ad4d-4d16-9728-12b05b662fc6-lib-modules\") pod \"kube-proxy-qmcm4\" (UID: \"8ab5345a-ad4d-4d16-9728-12b05b662fc6\") " pod="kube-system/kube-proxy-qmcm4"
	Dec 13 11:47:47 old-k8s-version-051699 kubelet[1387]: I1213 11:47:47.681647    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfvkf\" (UniqueName: \"kubernetes.io/projected/8ab5345a-ad4d-4d16-9728-12b05b662fc6-kube-api-access-xfvkf\") pod \"kube-proxy-qmcm4\" (UID: \"8ab5345a-ad4d-4d16-9728-12b05b662fc6\") " pod="kube-system/kube-proxy-qmcm4"
	Dec 13 11:47:47 old-k8s-version-051699 kubelet[1387]: I1213 11:47:47.681674    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9abd4e51-c2b6-44ea-8b75-ca7f080370fa-cni-cfg\") pod \"kindnet-n4ht9\" (UID: \"9abd4e51-c2b6-44ea-8b75-ca7f080370fa\") " pod="kube-system/kindnet-n4ht9"
	Dec 13 11:47:47 old-k8s-version-051699 kubelet[1387]: I1213 11:47:47.682500    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9abd4e51-c2b6-44ea-8b75-ca7f080370fa-xtables-lock\") pod \"kindnet-n4ht9\" (UID: \"9abd4e51-c2b6-44ea-8b75-ca7f080370fa\") " pod="kube-system/kindnet-n4ht9"
	Dec 13 11:47:47 old-k8s-version-051699 kubelet[1387]: I1213 11:47:47.682600    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9abd4e51-c2b6-44ea-8b75-ca7f080370fa-lib-modules\") pod \"kindnet-n4ht9\" (UID: \"9abd4e51-c2b6-44ea-8b75-ca7f080370fa\") " pod="kube-system/kindnet-n4ht9"
	Dec 13 11:47:47 old-k8s-version-051699 kubelet[1387]: I1213 11:47:47.682658    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ab5345a-ad4d-4d16-9728-12b05b662fc6-kube-proxy\") pod \"kube-proxy-qmcm4\" (UID: \"8ab5345a-ad4d-4d16-9728-12b05b662fc6\") " pod="kube-system/kube-proxy-qmcm4"
	Dec 13 11:47:47 old-k8s-version-051699 kubelet[1387]: I1213 11:47:47.682696    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ab5345a-ad4d-4d16-9728-12b05b662fc6-xtables-lock\") pod \"kube-proxy-qmcm4\" (UID: \"8ab5345a-ad4d-4d16-9728-12b05b662fc6\") " pod="kube-system/kube-proxy-qmcm4"
	Dec 13 11:47:48 old-k8s-version-051699 kubelet[1387]: W1213 11:47:48.800744    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-3befb0316b905d949eb745c0301a2e9fa8e6bc1d6f59025f1029ae4e553cfb05 WatchSource:0}: Error finding container 3befb0316b905d949eb745c0301a2e9fa8e6bc1d6f59025f1029ae4e553cfb05: Status 404 returned error can't find the container with id 3befb0316b905d949eb745c0301a2e9fa8e6bc1d6f59025f1029ae4e553cfb05
	Dec 13 11:47:49 old-k8s-version-051699 kubelet[1387]: W1213 11:47:49.083214    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-935b4ff493067ac56fa9d177f997e037ce19eca670cbed270bd3982ba1452964 WatchSource:0}: Error finding container 935b4ff493067ac56fa9d177f997e037ce19eca670cbed270bd3982ba1452964: Status 404 returned error can't find the container with id 935b4ff493067ac56fa9d177f997e037ce19eca670cbed270bd3982ba1452964
	Dec 13 11:47:52 old-k8s-version-051699 kubelet[1387]: I1213 11:47:52.312034    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qmcm4" podStartSLOduration=5.311989044 podCreationTimestamp="2025-12-13 11:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:47:49.346076341 +0000 UTC m=+14.551270855" watchObservedRunningTime="2025-12-13 11:47:52.311989044 +0000 UTC m=+17.517183592"
	Dec 13 11:47:55 old-k8s-version-051699 kubelet[1387]: I1213 11:47:55.048899    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-n4ht9" podStartSLOduration=5.223425441 podCreationTimestamp="2025-12-13 11:47:47 +0000 UTC" firstStartedPulling="2025-12-13 11:47:48.811075363 +0000 UTC m=+14.016269861" lastFinishedPulling="2025-12-13 11:47:51.636503812 +0000 UTC m=+16.841698318" observedRunningTime="2025-12-13 11:47:52.31266219 +0000 UTC m=+17.517856696" watchObservedRunningTime="2025-12-13 11:47:55.048853898 +0000 UTC m=+20.254048396"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: I1213 11:48:02.372921    1387 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: I1213 11:48:02.405045    1387 topology_manager.go:215] "Topology Admit Handler" podUID="ae27a521-38ba-4d9d-8b84-6dfc46e48388" podNamespace="kube-system" podName="coredns-5dd5756b68-w2hls"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: I1213 11:48:02.411173    1387 topology_manager.go:215] "Topology Admit Handler" podUID="0fb2212c-6b12-43c4-8d5a-575f27bea92e" podNamespace="kube-system" podName="storage-provisioner"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: I1213 11:48:02.516181    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrv7k\" (UniqueName: \"kubernetes.io/projected/0fb2212c-6b12-43c4-8d5a-575f27bea92e-kube-api-access-qrv7k\") pod \"storage-provisioner\" (UID: \"0fb2212c-6b12-43c4-8d5a-575f27bea92e\") " pod="kube-system/storage-provisioner"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: I1213 11:48:02.516239    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0fb2212c-6b12-43c4-8d5a-575f27bea92e-tmp\") pod \"storage-provisioner\" (UID: \"0fb2212c-6b12-43c4-8d5a-575f27bea92e\") " pod="kube-system/storage-provisioner"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: I1213 11:48:02.516269    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae27a521-38ba-4d9d-8b84-6dfc46e48388-config-volume\") pod \"coredns-5dd5756b68-w2hls\" (UID: \"ae27a521-38ba-4d9d-8b84-6dfc46e48388\") " pod="kube-system/coredns-5dd5756b68-w2hls"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: I1213 11:48:02.516297    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44grb\" (UniqueName: \"kubernetes.io/projected/ae27a521-38ba-4d9d-8b84-6dfc46e48388-kube-api-access-44grb\") pod \"coredns-5dd5756b68-w2hls\" (UID: \"ae27a521-38ba-4d9d-8b84-6dfc46e48388\") " pod="kube-system/coredns-5dd5756b68-w2hls"
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: W1213 11:48:02.730728    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-85dc0c17c87254a20e953fb60483066ee27348c58a696174ae69fa6a767cf85a WatchSource:0}: Error finding container 85dc0c17c87254a20e953fb60483066ee27348c58a696174ae69fa6a767cf85a: Status 404 returned error can't find the container with id 85dc0c17c87254a20e953fb60483066ee27348c58a696174ae69fa6a767cf85a
	Dec 13 11:48:02 old-k8s-version-051699 kubelet[1387]: W1213 11:48:02.768027    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-42b054a86132fd2d5d8b6c6815f329c93fdc38f886f4a45a7dc99a9b89123e3c WatchSource:0}: Error finding container 42b054a86132fd2d5d8b6c6815f329c93fdc38f886f4a45a7dc99a9b89123e3c: Status 404 returned error can't find the container with id 42b054a86132fd2d5d8b6c6815f329c93fdc38f886f4a45a7dc99a9b89123e3c
	Dec 13 11:48:03 old-k8s-version-051699 kubelet[1387]: I1213 11:48:03.346158    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.346111642 podCreationTimestamp="2025-12-13 11:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:48:03.327881915 +0000 UTC m=+28.533076421" watchObservedRunningTime="2025-12-13 11:48:03.346111642 +0000 UTC m=+28.551306140"
	Dec 13 11:48:05 old-k8s-version-051699 kubelet[1387]: I1213 11:48:05.553069    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-w2hls" podStartSLOduration=17.553024367 podCreationTimestamp="2025-12-13 11:47:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:48:03.347151576 +0000 UTC m=+28.552346073" watchObservedRunningTime="2025-12-13 11:48:05.553024367 +0000 UTC m=+30.758218873"
	Dec 13 11:48:05 old-k8s-version-051699 kubelet[1387]: I1213 11:48:05.553461    1387 topology_manager.go:215] "Topology Admit Handler" podUID="c750a8f1-85bf-45da-b73b-d38717856602" podNamespace="default" podName="busybox"
	Dec 13 11:48:05 old-k8s-version-051699 kubelet[1387]: I1213 11:48:05.637745    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtm7\" (UniqueName: \"kubernetes.io/projected/c750a8f1-85bf-45da-b73b-d38717856602-kube-api-access-pxtm7\") pod \"busybox\" (UID: \"c750a8f1-85bf-45da-b73b-d38717856602\") " pod="default/busybox"
	Dec 13 11:48:08 old-k8s-version-051699 kubelet[1387]: I1213 11:48:08.341960    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.268441219 podCreationTimestamp="2025-12-13 11:48:05 +0000 UTC" firstStartedPulling="2025-12-13 11:48:05.883331403 +0000 UTC m=+31.088525901" lastFinishedPulling="2025-12-13 11:48:07.956777955 +0000 UTC m=+33.161972453" observedRunningTime="2025-12-13 11:48:08.34174327 +0000 UTC m=+33.546937768" watchObservedRunningTime="2025-12-13 11:48:08.341887771 +0000 UTC m=+33.547082277"
	
	
	==> storage-provisioner [eefce04ef2c9d6fa8ccfa4abe61aefd7538cad0910d4d56132e2c1ea23629b92] <==
	I1213 11:48:02.842902       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:48:02.859257       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:48:02.859310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 11:48:02.867820       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:48:02.868035       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-051699_1a91c424-614f-46d3-a97a-2f8ede72c69f!
	I1213 11:48:02.869335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9bc71b9-ee13-4321-8101-70d105400c33", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-051699_1a91c424-614f-46d3-a97a-2f8ede72c69f became leader
	I1213 11:48:02.968448       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-051699_1a91c424-614f-46d3-a97a-2f8ede72c69f!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-051699 -n old-k8s-version-051699
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-051699 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-051699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-051699 --alsologtostderr -v=1: exit status 80 (1.863448669s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-051699 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:49:35.862119  588671 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:49:35.862288  588671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:49:35.862319  588671 out.go:374] Setting ErrFile to fd 2...
	I1213 11:49:35.862341  588671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:49:35.862646  588671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:49:35.862940  588671 out.go:368] Setting JSON to false
	I1213 11:49:35.863007  588671 mustload.go:66] Loading cluster: old-k8s-version-051699
	I1213 11:49:35.863429  588671 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:49:35.863972  588671 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:49:35.882496  588671 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:49:35.882828  588671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:49:35.939125  588671 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:49:35.92896753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:49:35.939885  588671 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-051699 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 11:49:35.944962  588671 out.go:179] * Pausing node old-k8s-version-051699 ... 
	I1213 11:49:35.949828  588671 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:49:35.950238  588671 ssh_runner.go:195] Run: systemctl --version
	I1213 11:49:35.950293  588671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:49:35.968498  588671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:49:36.099403  588671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:49:36.116063  588671 pause.go:52] kubelet running: true
	I1213 11:49:36.116172  588671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:49:36.366753  588671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:49:36.366836  588671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:49:36.438111  588671 cri.go:89] found id: "b59f614a2a32ae75a805e01d493986cd39e6a71e3aed6253427f6024f7790b2e"
	I1213 11:49:36.438133  588671 cri.go:89] found id: "13641f9582975492cba357f81a47d368b97b186827c5e55ee5221857ac9af3cb"
	I1213 11:49:36.438138  588671 cri.go:89] found id: "252ea1238e50c637074d8e48f4c01b8d464d784a390cecebc97a291bc3d45d6c"
	I1213 11:49:36.438143  588671 cri.go:89] found id: "c2561ac9d9b2c9534552136b22e55b5102dd7123c91b9a47f4f6f0d17845ca3c"
	I1213 11:49:36.438146  588671 cri.go:89] found id: "2e30c95faed4944877f25b1255472ca14b20fc12fbc0176060a435a46b1d39b3"
	I1213 11:49:36.438150  588671 cri.go:89] found id: "d1083a84171e8b109d5205f26f918e35b5462caf78b728353423ce03b323617e"
	I1213 11:49:36.438153  588671 cri.go:89] found id: "938d6fe9735fccb0295851287a2c46be9275edddee6d4e17fa2757fee05fb949"
	I1213 11:49:36.438156  588671 cri.go:89] found id: "c6727afa741cfa7ae0dee9ad26ea0a874d1c2fc01c26783d2fd633cfe3f8989c"
	I1213 11:49:36.438180  588671 cri.go:89] found id: "f0ad5667d7443959d98e2dd8e90cf8d0216a7d83e917e88c8d3bcb7016cb1041"
	I1213 11:49:36.438194  588671 cri.go:89] found id: "db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f"
	I1213 11:49:36.438198  588671 cri.go:89] found id: "ea0421efb7eb56b9e66dde1a483a1434ed846923a3c65b69a736c2f66a0ecb91"
	I1213 11:49:36.438201  588671 cri.go:89] found id: ""
	I1213 11:49:36.438264  588671 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:49:36.451179  588671 retry.go:31] will retry after 233.670775ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:49:36Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:49:36.685546  588671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:49:36.699029  588671 pause.go:52] kubelet running: false
	I1213 11:49:36.699093  588671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:49:36.880199  588671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:49:36.880280  588671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:49:36.948080  588671 cri.go:89] found id: "b59f614a2a32ae75a805e01d493986cd39e6a71e3aed6253427f6024f7790b2e"
	I1213 11:49:36.948103  588671 cri.go:89] found id: "13641f9582975492cba357f81a47d368b97b186827c5e55ee5221857ac9af3cb"
	I1213 11:49:36.948108  588671 cri.go:89] found id: "252ea1238e50c637074d8e48f4c01b8d464d784a390cecebc97a291bc3d45d6c"
	I1213 11:49:36.948112  588671 cri.go:89] found id: "c2561ac9d9b2c9534552136b22e55b5102dd7123c91b9a47f4f6f0d17845ca3c"
	I1213 11:49:36.948116  588671 cri.go:89] found id: "2e30c95faed4944877f25b1255472ca14b20fc12fbc0176060a435a46b1d39b3"
	I1213 11:49:36.948119  588671 cri.go:89] found id: "d1083a84171e8b109d5205f26f918e35b5462caf78b728353423ce03b323617e"
	I1213 11:49:36.948147  588671 cri.go:89] found id: "938d6fe9735fccb0295851287a2c46be9275edddee6d4e17fa2757fee05fb949"
	I1213 11:49:36.948157  588671 cri.go:89] found id: "c6727afa741cfa7ae0dee9ad26ea0a874d1c2fc01c26783d2fd633cfe3f8989c"
	I1213 11:49:36.948160  588671 cri.go:89] found id: "f0ad5667d7443959d98e2dd8e90cf8d0216a7d83e917e88c8d3bcb7016cb1041"
	I1213 11:49:36.948167  588671 cri.go:89] found id: "db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f"
	I1213 11:49:36.948179  588671 cri.go:89] found id: "ea0421efb7eb56b9e66dde1a483a1434ed846923a3c65b69a736c2f66a0ecb91"
	I1213 11:49:36.948183  588671 cri.go:89] found id: ""
	I1213 11:49:36.948242  588671 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:49:36.959374  588671 retry.go:31] will retry after 431.990126ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:49:36Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:49:37.392045  588671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:49:37.405671  588671 pause.go:52] kubelet running: false
	I1213 11:49:37.405807  588671 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:49:37.566928  588671 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:49:37.567047  588671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:49:37.643942  588671 cri.go:89] found id: "b59f614a2a32ae75a805e01d493986cd39e6a71e3aed6253427f6024f7790b2e"
	I1213 11:49:37.644019  588671 cri.go:89] found id: "13641f9582975492cba357f81a47d368b97b186827c5e55ee5221857ac9af3cb"
	I1213 11:49:37.644045  588671 cri.go:89] found id: "252ea1238e50c637074d8e48f4c01b8d464d784a390cecebc97a291bc3d45d6c"
	I1213 11:49:37.644066  588671 cri.go:89] found id: "c2561ac9d9b2c9534552136b22e55b5102dd7123c91b9a47f4f6f0d17845ca3c"
	I1213 11:49:37.644097  588671 cri.go:89] found id: "2e30c95faed4944877f25b1255472ca14b20fc12fbc0176060a435a46b1d39b3"
	I1213 11:49:37.644115  588671 cri.go:89] found id: "d1083a84171e8b109d5205f26f918e35b5462caf78b728353423ce03b323617e"
	I1213 11:49:37.644144  588671 cri.go:89] found id: "938d6fe9735fccb0295851287a2c46be9275edddee6d4e17fa2757fee05fb949"
	I1213 11:49:37.644188  588671 cri.go:89] found id: "c6727afa741cfa7ae0dee9ad26ea0a874d1c2fc01c26783d2fd633cfe3f8989c"
	I1213 11:49:37.644206  588671 cri.go:89] found id: "f0ad5667d7443959d98e2dd8e90cf8d0216a7d83e917e88c8d3bcb7016cb1041"
	I1213 11:49:37.644228  588671 cri.go:89] found id: "db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f"
	I1213 11:49:37.644268  588671 cri.go:89] found id: "ea0421efb7eb56b9e66dde1a483a1434ed846923a3c65b69a736c2f66a0ecb91"
	I1213 11:49:37.644297  588671 cri.go:89] found id: ""
	I1213 11:49:37.644377  588671 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:49:37.659098  588671 out.go:203] 
	W1213 11:49:37.662092  588671 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:49:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:49:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 11:49:37.662181  588671 out.go:285] * 
	* 
	W1213 11:49:37.668513  588671 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:49:37.671419  588671 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-051699 --alsologtostderr -v=1 failed: exit status 80
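Note on the failure above: exit status 80 is minikube's GUEST_PAUSE error, raised because "sudo runc list -f json" inside the node kept failing with "open /run/runc: no such file or directory" even though crictl still listed the kube-system containers. A minimal way to reproduce that check by hand, assuming the old-k8s-version-051699 profile is still up; these commands simply mirror the ssh_runner invocations visible in the stderr log and are illustrative, not part of the test suite:

	# list containers the way pause.go does (this is the call that failed)
	out/minikube-linux-arm64 -p old-k8s-version-051699 ssh -- sudo runc list -f json
	# confirm the runc state directory named in the error is actually absent
	out/minikube-linux-arm64 -p old-k8s-version-051699 ssh -- sudo ls /run/runc
	# compare with what the CRI reports for the same namespace
	out/minikube-linux-arm64 -p old-k8s-version-051699 ssh -- sudo crictl ps --quiet --label io.kubernetes.pod.namespace=kube-system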
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-051699
helpers_test.go:244: (dbg) docker inspect old-k8s-version-051699:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b",
	        "Created": "2025-12-13T11:47:09.22535414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 586575,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:48:29.909769123Z",
	            "FinishedAt": "2025-12-13T11:48:29.088172023Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/hosts",
	        "LogPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b-json.log",
	        "Name": "/old-k8s-version-051699",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-051699:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-051699",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b",
	                "LowerDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-051699",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-051699/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-051699",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-051699",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-051699",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94116e67eb08bec722555a345354f9365bba84c994aed92458a5141ba1302061",
	            "SandboxKey": "/var/run/docker/netns/94116e67eb08",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-051699": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:9b:7f:dd:ff:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6116ab229e22ee69821bd47d3a0f489af279d0545fa10007411817efdd59740",
	                    "EndpointID": "5ee19d2a4e7022f605d19107b439ec37c82f4ff9c7bd08ed38c0bb4549d4233c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-051699",
	                        "5e184c16699d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699: exit status 2 (358.38957ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-051699 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-051699 logs -n 25: (1.382068954s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-062409 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo containerd config dump                                                                                                                                                                                                  │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo crio config                                                                                                                                                                                                             │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ delete  │ -p cilium-062409                                                                                                                                                                                                                              │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:45 UTC │
	│ start   │ -p force-systemd-env-181508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-181508  │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p kubernetes-upgrade-854588                                                                                                                                                                                                                  │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420007    │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p force-systemd-env-181508                                                                                                                                                                                                                   │ force-systemd-env-181508  │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-options-522461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ ssh     │ cert-options-522461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:47 UTC │
	│ ssh     │ -p cert-options-522461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ delete  │ -p cert-options-522461                                                                                                                                                                                                                        │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │                     │
	│ stop    │ -p old-k8s-version-051699 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-051699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:48:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:48:29.629865  586444 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:48:29.630066  586444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:48:29.630091  586444 out.go:374] Setting ErrFile to fd 2...
	I1213 11:48:29.630109  586444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:48:29.630476  586444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:48:29.631082  586444 out.go:368] Setting JSON to false
	I1213 11:48:29.632261  586444 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12662,"bootTime":1765613848,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:48:29.632360  586444 start.go:143] virtualization:  
	I1213 11:48:29.635463  586444 out.go:179] * [old-k8s-version-051699] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:48:29.639358  586444 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:48:29.639433  586444 notify.go:221] Checking for updates...
	I1213 11:48:29.645470  586444 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:48:29.648391  586444 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:48:29.651310  586444 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:48:29.654210  586444 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:48:29.657146  586444 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:48:29.660532  586444 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:48:29.664237  586444 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1213 11:48:29.667153  586444 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:48:29.700123  586444 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:48:29.700268  586444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:48:29.755886  586444 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:48:29.746142496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:48:29.755992  586444 docker.go:319] overlay module found
	I1213 11:48:29.759143  586444 out.go:179] * Using the docker driver based on existing profile
	I1213 11:48:29.761991  586444 start.go:309] selected driver: docker
	I1213 11:48:29.762018  586444 start.go:927] validating driver "docker" against &{Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:48:29.762146  586444 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:48:29.762964  586444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:48:29.815887  586444 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:48:29.8071295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:48:29.816223  586444 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:48:29.816255  586444 cni.go:84] Creating CNI manager for ""
	I1213 11:48:29.816312  586444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:48:29.816349  586444 start.go:353] cluster config:
	{Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:48:29.819736  586444 out.go:179] * Starting "old-k8s-version-051699" primary control-plane node in "old-k8s-version-051699" cluster
	I1213 11:48:29.822546  586444 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:48:29.825446  586444 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:48:29.828230  586444 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:48:29.828304  586444 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 11:48:29.828332  586444 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:48:29.828344  586444 cache.go:65] Caching tarball of preloaded images
	I1213 11:48:29.828444  586444 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:48:29.828461  586444 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1213 11:48:29.828573  586444 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/config.json ...
	I1213 11:48:29.847860  586444 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:48:29.847888  586444 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:48:29.847904  586444 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:48:29.847934  586444 start.go:360] acquireMachinesLock for old-k8s-version-051699: {Name:mk7421d20807d926bcc4f5128055e7d390596771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:48:29.847991  586444 start.go:364] duration metric: took 34.921µs to acquireMachinesLock for "old-k8s-version-051699"
	I1213 11:48:29.848013  586444 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:48:29.848020  586444 fix.go:54] fixHost starting: 
	I1213 11:48:29.848282  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:29.866006  586444 fix.go:112] recreateIfNeeded on old-k8s-version-051699: state=Stopped err=<nil>
	W1213 11:48:29.866034  586444 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:48:29.869285  586444 out.go:252] * Restarting existing docker container for "old-k8s-version-051699" ...
	I1213 11:48:29.869369  586444 cli_runner.go:164] Run: docker start old-k8s-version-051699
	I1213 11:48:30.161387  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:30.185705  586444 kic.go:430] container "old-k8s-version-051699" state is running.
	I1213 11:48:30.186123  586444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:48:30.213502  586444 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/config.json ...
	I1213 11:48:30.213748  586444 machine.go:94] provisionDockerMachine start ...
	I1213 11:48:30.213815  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:30.236189  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:30.236710  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:30.236726  586444 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:48:30.237940  586444 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44646->127.0.0.1:33433: read: connection reset by peer
	I1213 11:48:33.399081  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-051699
	
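The first SSH dial above (at 11:48:30) is reset because the freshly restarted container's sshd is not yet accepting sessions; the provisioner retries and the hostname command succeeds about three seconds later. A minimal Go sketch of that wait-for-the-port pattern follows (standard library only; it only checks TCP reachability, not the full SSH handshake the provisioner actually retries, and the function name is hypothetical):

	// waitForTCP retries a TCP dial until the address accepts connections or
	// the deadline passes, mirroring the retry visible in the log: the first
	// dial to 127.0.0.1:33433 is reset while sshd starts, a later one succeeds.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("port %s not reachable after %s: %w", addr, timeout, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForTCP("127.0.0.1:33433", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port is accepting connections")
	}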
	I1213 11:48:33.399110  586444 ubuntu.go:182] provisioning hostname "old-k8s-version-051699"
	I1213 11:48:33.399179  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:33.415973  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:33.416297  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:33.416316  586444 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-051699 && echo "old-k8s-version-051699" | sudo tee /etc/hostname
	I1213 11:48:33.577992  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-051699
	
	I1213 11:48:33.578070  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:33.597813  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:33.598127  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:33.598154  586444 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-051699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-051699/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-051699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:48:33.747810  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: 
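Each remote step above first re-resolves which host port Docker published for the container's 22/tcp, using `docker container inspect -f` with a Go template, and then opens a fresh SSH client against 127.0.0.1 on that port (33433 in this run). A minimal sketch of the same lookup, assuming only that the docker CLI is on PATH (the helper name is hypothetical, not minikube's code):

	// sshHostPort asks the docker CLI which host port is published for the
	// container's 22/tcp, using the same Go template seen throughout the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	func sshHostPort(container string) (int, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return 0, fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strconv.Atoi(strings.TrimSpace(string(out)))
	}

	func main() {
		port, err := sshHostPort("old-k8s-version-051699")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("ssh is published on 127.0.0.1:%d\n", port) // 33433 in this run
	}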
	I1213 11:48:33.747836  586444 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:48:33.747874  586444 ubuntu.go:190] setting up certificates
	I1213 11:48:33.747885  586444 provision.go:84] configureAuth start
	I1213 11:48:33.747963  586444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:48:33.765303  586444 provision.go:143] copyHostCerts
	I1213 11:48:33.765459  586444 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:48:33.765474  586444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:48:33.765559  586444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:48:33.765676  586444 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:48:33.765688  586444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:48:33.765718  586444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:48:33.765787  586444 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:48:33.765802  586444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:48:33.765831  586444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:48:33.765897  586444 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-051699 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-051699]
	I1213 11:48:34.026605  586444 provision.go:177] copyRemoteCerts
	I1213 11:48:34.026676  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:48:34.026719  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.046409  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.151787  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 11:48:34.172155  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:48:34.191142  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:48:34.209958  586444 provision.go:87] duration metric: took 462.038592ms to configureAuth
	I1213 11:48:34.210028  586444 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:48:34.210247  586444 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:48:34.210358  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.227912  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:34.228231  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:34.228252  586444 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:48:34.587129  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:48:34.587157  586444 machine.go:97] duration metric: took 4.37339504s to provisionDockerMachine
	I1213 11:48:34.587169  586444 start.go:293] postStartSetup for "old-k8s-version-051699" (driver="docker")
	I1213 11:48:34.587180  586444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:48:34.587243  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:48:34.587297  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.608419  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.719234  586444 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:48:34.722472  586444 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:48:34.722499  586444 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:48:34.722510  586444 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:48:34.722564  586444 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:48:34.722669  586444 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:48:34.722770  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:48:34.730206  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:48:34.748099  586444 start.go:296] duration metric: took 160.913882ms for postStartSetup
	I1213 11:48:34.748186  586444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:48:34.748229  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.766043  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.868538  586444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:48:34.873380  586444 fix.go:56] duration metric: took 5.02535291s for fixHost
	I1213 11:48:34.873409  586444 start.go:83] releasing machines lock for "old-k8s-version-051699", held for 5.025404659s
	I1213 11:48:34.873487  586444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:48:34.890286  586444 ssh_runner.go:195] Run: cat /version.json
	I1213 11:48:34.890342  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.890377  586444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:48:34.890431  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.908850  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.913404  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:35.125225  586444 ssh_runner.go:195] Run: systemctl --version
	I1213 11:48:35.132212  586444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:48:35.170610  586444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:48:35.174969  586444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:48:35.175042  586444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:48:35.185275  586444 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:48:35.185299  586444 start.go:496] detecting cgroup driver to use...
	I1213 11:48:35.185330  586444 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:48:35.185386  586444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:48:35.201031  586444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:48:35.214246  586444 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:48:35.214337  586444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:48:35.230410  586444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:48:35.243860  586444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:48:35.366682  586444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:48:35.485573  586444 docker.go:234] disabling docker service ...
	I1213 11:48:35.485666  586444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:48:35.501797  586444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:48:35.514746  586444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:48:35.625857  586444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:48:35.749772  586444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:48:35.763504  586444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:48:35.778059  586444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 11:48:35.778155  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.787787  586444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:48:35.787887  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.797243  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.805994  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.814821  586444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:48:35.823097  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.832828  586444 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.841573  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.850699  586444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:48:35.858577  586444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:48:35.866259  586444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:48:35.977547  586444 ssh_runner.go:195] Run: sudo systemctl restart crio
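The sed invocations above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: they pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager to "cgroupfs", reset conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough in-memory equivalent of the first three edits, as a sketch only (regexp instead of sed, not minikube's implementation):

	// rewriteCrioDropIn applies, in memory, the same kind of edits the log
	// performs with sed on 02-crio.conf: pin the pause image, force the
	// cgroupfs cgroup manager, and reset conmon_cgroup to "pod".
	package main

	import (
		"fmt"
		"regexp"
	)

	func rewriteCrioDropIn(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
			ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		in := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\npause_image = \"registry.k8s.io/pause:3.10\"\n"
		fmt.Print(rewriteCrioDropIn(in))
	}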
	I1213 11:48:36.162980  586444 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:48:36.163048  586444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:48:36.166890  586444 start.go:564] Will wait 60s for crictl version
	I1213 11:48:36.166997  586444 ssh_runner.go:195] Run: which crictl
	I1213 11:48:36.170667  586444 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:48:36.196304  586444 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:48:36.196476  586444 ssh_runner.go:195] Run: crio --version
	I1213 11:48:36.225823  586444 ssh_runner.go:195] Run: crio --version
	I1213 11:48:36.257937  586444 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1213 11:48:36.260682  586444 cli_runner.go:164] Run: docker network inspect old-k8s-version-051699 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:48:36.280585  586444 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:48:36.284297  586444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:48:36.293931  586444 kubeadm.go:884] updating cluster {Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:48:36.294044  586444 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 11:48:36.294105  586444 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:48:36.330621  586444 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:48:36.330648  586444 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:48:36.330707  586444 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:48:36.363811  586444 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:48:36.363835  586444 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:48:36.363843  586444 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1213 11:48:36.363944  586444 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-051699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:48:36.364030  586444 ssh_runner.go:195] Run: crio config
	I1213 11:48:36.425952  586444 cni.go:84] Creating CNI manager for ""
	I1213 11:48:36.426043  586444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:48:36.426088  586444 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:48:36.426143  586444 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-051699 NodeName:old-k8s-version-051699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:48:36.426387  586444 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-051699"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
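The generated kubeadm config above is one multi-document YAML stream containing an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration (kubelet.config.k8s.io/v1beta1) and a KubeProxyConfiguration (kubeproxy.config.k8s.io/v1alpha1); a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch for splitting such a stream and listing the document kinds, assuming gopkg.in/yaml.v3 is available (not part of minikube):

	// listKinds decodes a multi-document kubeadm.yaml stream like the one
	// dumped above and prints each document's apiVersion and kind.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var tm typeMeta
			if err := dec.Decode(&tm); err == io.EOF {
				break
			} else if err != nil {
				fmt.Println("decode:", err)
				return
			}
			fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
		}
	}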
	I1213 11:48:36.426516  586444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1213 11:48:36.437425  586444 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:48:36.437564  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:48:36.445523  586444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1213 11:48:36.457903  586444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:48:36.472609  586444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1213 11:48:36.486219  586444 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:48:36.489696  586444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:48:36.499336  586444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:48:36.614185  586444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:48:36.630803  586444 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699 for IP: 192.168.85.2
	I1213 11:48:36.630868  586444 certs.go:195] generating shared ca certs ...
	I1213 11:48:36.630901  586444 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:36.631072  586444 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:48:36.631149  586444 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:48:36.631174  586444 certs.go:257] generating profile certs ...
	I1213 11:48:36.631285  586444 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.key
	I1213 11:48:36.631389  586444 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key.8b85897d
	I1213 11:48:36.631462  586444 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key
	I1213 11:48:36.631645  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:48:36.631714  586444 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:48:36.631751  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:48:36.631812  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:48:36.631865  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:48:36.631913  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:48:36.631991  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:48:36.632653  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:48:36.660221  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:48:36.680701  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:48:36.702748  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:48:36.728873  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 11:48:36.750963  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:48:36.778938  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:48:36.803214  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:48:36.822651  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:48:36.850990  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:48:36.871910  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:48:36.891332  586444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:48:36.909502  586444 ssh_runner.go:195] Run: openssl version
	I1213 11:48:36.917904  586444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.927820  586444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:48:36.936172  586444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.939917  586444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.940008  586444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.983151  586444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:48:36.990576  586444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:36.998062  586444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:48:37.008000  586444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:37.012978  586444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:37.013050  586444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:37.055827  586444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:48:37.063485  586444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.070840  586444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:48:37.078769  586444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.082798  586444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.082861  586444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.129494  586444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
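The certificate checks above follow the OpenSSL c_rehash layout: each CA is linked under /usr/share/ca-certificates, its subject-name hash is computed with `openssl x509 -hash -noout`, and a symlink named <hash>.0 is expected in /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). A minimal sketch of deriving that link path, assuming the openssl CLI is on PATH (the helper name is hypothetical):

	// certHashLink computes the OpenSSL subject-name hash of a PEM certificate
	// and returns the /etc/ssl/certs/<hash>.0 path that the log's
	// "sudo test -L" checks for.
	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func certHashLink(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", fmt.Errorf("openssl x509 -hash %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA.pem in this run
		return filepath.Join("/etc/ssl/certs", hash+".0"), nil
	}

	func main() {
		link, err := certHashLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expected symlink:", link)
	}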
	I1213 11:48:37.137991  586444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:48:37.141614  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:48:37.183155  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:48:37.227339  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:48:37.278156  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:48:37.348466  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:48:37.445982  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 11:48:37.512732  586444 kubeadm.go:401] StartCluster: {Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:48:37.512840  586444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:48:37.512922  586444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:48:37.568567  586444 cri.go:89] found id: "d1083a84171e8b109d5205f26f918e35b5462caf78b728353423ce03b323617e"
	I1213 11:48:37.568608  586444 cri.go:89] found id: "938d6fe9735fccb0295851287a2c46be9275edddee6d4e17fa2757fee05fb949"
	I1213 11:48:37.568614  586444 cri.go:89] found id: "c6727afa741cfa7ae0dee9ad26ea0a874d1c2fc01c26783d2fd633cfe3f8989c"
	I1213 11:48:37.568618  586444 cri.go:89] found id: "f0ad5667d7443959d98e2dd8e90cf8d0216a7d83e917e88c8d3bcb7016cb1041"
	I1213 11:48:37.568630  586444 cri.go:89] found id: ""
	I1213 11:48:37.568710  586444 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:48:37.589176  586444 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:48:37Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:48:37.589282  586444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:48:37.603224  586444 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:48:37.603293  586444 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:48:37.603364  586444 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:48:37.614095  586444 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:48:37.614759  586444 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-051699" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:48:37.615079  586444 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-051699" cluster setting kubeconfig missing "old-k8s-version-051699" context setting]
	I1213 11:48:37.615605  586444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:37.617153  586444 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:48:37.632138  586444 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:48:37.632211  586444 kubeadm.go:602] duration metric: took 28.897644ms to restartPrimaryControlPlane
	I1213 11:48:37.632237  586444 kubeadm.go:403] duration metric: took 119.525865ms to StartCluster
	I1213 11:48:37.632269  586444 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:37.632361  586444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:48:37.633271  586444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:37.633522  586444 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:48:37.633920  586444 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:48:37.633987  586444 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:48:37.634055  586444 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-051699"
	I1213 11:48:37.634064  586444 addons.go:70] Setting dashboard=true in profile "old-k8s-version-051699"
	I1213 11:48:37.634087  586444 addons.go:239] Setting addon dashboard=true in "old-k8s-version-051699"
	W1213 11:48:37.634095  586444 addons.go:248] addon dashboard should already be in state true
	I1213 11:48:37.634137  586444 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:48:37.634069  586444 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-051699"
	W1213 11:48:37.634190  586444 addons.go:248] addon storage-provisioner should already be in state true
	I1213 11:48:37.634212  586444 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:48:37.634636  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.634730  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.634073  586444 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-051699"
	I1213 11:48:37.635100  586444 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-051699"
	I1213 11:48:37.635356  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.639329  586444 out.go:179] * Verifying Kubernetes components...
	I1213 11:48:37.642545  586444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:48:37.694267  586444 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-051699"
	W1213 11:48:37.694295  586444 addons.go:248] addon default-storageclass should already be in state true
	I1213 11:48:37.694320  586444 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:48:37.694768  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.699988  586444 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:48:37.705313  586444 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:48:37.705337  586444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:48:37.705413  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:37.713705  586444 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:48:37.718193  586444 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 11:48:37.723617  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:48:37.723649  586444 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:48:37.723723  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:37.743673  586444 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:48:37.743697  586444 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:48:37.743763  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:37.746545  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:37.783636  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:37.791090  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:37.985280  586444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:48:37.997317  586444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:48:38.036496  586444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:48:38.187768  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:48:38.187840  586444 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:48:38.290108  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:48:38.290186  586444 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:48:38.321444  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:48:38.321516  586444 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:48:38.344186  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:48:38.344258  586444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:48:38.368085  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:48:38.368163  586444 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:48:38.384480  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:48:38.384501  586444 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:48:38.407051  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:48:38.407073  586444 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:48:38.435160  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:48:38.435183  586444 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:48:38.459495  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:48:38.459577  586444 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:48:38.489900  586444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:48:43.843978  586444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.858664833s)
	I1213 11:48:43.844033  586444 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.846644812s)
	I1213 11:48:43.844054  586444 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-051699" to be "Ready" ...
	I1213 11:48:43.844378  586444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.807802451s)
	I1213 11:48:43.895367  586444 node_ready.go:49] node "old-k8s-version-051699" is "Ready"
	I1213 11:48:43.895398  586444 node_ready.go:38] duration metric: took 51.327355ms for node "old-k8s-version-051699" to be "Ready" ...
	I1213 11:48:43.895412  586444 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:48:43.895477  586444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:48:44.452396  586444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.962404227s)
	I1213 11:48:44.452505  586444 api_server.go:72] duration metric: took 6.818924941s to wait for apiserver process to appear ...
	I1213 11:48:44.452654  586444 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:48:44.452682  586444 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:48:44.455868  586444 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-051699 addons enable metrics-server
	
	I1213 11:48:44.458888  586444 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1213 11:48:44.461928  586444 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 11:48:44.462148  586444 addons.go:530] duration metric: took 6.8281589s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1213 11:48:44.463479  586444 api_server.go:141] control plane version: v1.28.0
	I1213 11:48:44.463505  586444 api_server.go:131] duration metric: took 10.842242ms to wait for apiserver health ...
	I1213 11:48:44.463562  586444 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:48:44.473649  586444 system_pods.go:59] 8 kube-system pods found
	I1213 11:48:44.473694  586444 system_pods.go:61] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:44.473703  586444 system_pods.go:61] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:48:44.473708  586444 system_pods.go:61] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:44.473715  586444 system_pods.go:61] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:48:44.473722  586444 system_pods.go:61] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:48:44.473727  586444 system_pods.go:61] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:44.473732  586444 system_pods.go:61] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:48:44.473736  586444 system_pods.go:61] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Running
	I1213 11:48:44.473748  586444 system_pods.go:74] duration metric: took 10.180247ms to wait for pod list to return data ...
	I1213 11:48:44.473756  586444 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:48:44.477278  586444 default_sa.go:45] found service account: "default"
	I1213 11:48:44.477304  586444 default_sa.go:55] duration metric: took 3.541957ms for default service account to be created ...
	I1213 11:48:44.477323  586444 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:48:44.480930  586444 system_pods.go:86] 8 kube-system pods found
	I1213 11:48:44.480968  586444 system_pods.go:89] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:44.480979  586444 system_pods.go:89] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:48:44.480986  586444 system_pods.go:89] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:44.480993  586444 system_pods.go:89] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:48:44.481002  586444 system_pods.go:89] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:48:44.481007  586444 system_pods.go:89] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:44.481022  586444 system_pods.go:89] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:48:44.481028  586444 system_pods.go:89] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Running
	I1213 11:48:44.481043  586444 system_pods.go:126] duration metric: took 3.7051ms to wait for k8s-apps to be running ...
	I1213 11:48:44.481056  586444 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:48:44.481122  586444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:48:44.509090  586444 system_svc.go:56] duration metric: took 28.023226ms WaitForService to wait for kubelet
	I1213 11:48:44.509124  586444 kubeadm.go:587] duration metric: took 6.875542983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:48:44.509152  586444 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:48:44.514046  586444 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:48:44.514090  586444 node_conditions.go:123] node cpu capacity is 2
	I1213 11:48:44.514103  586444 node_conditions.go:105] duration metric: took 4.946159ms to run NodePressure ...
	I1213 11:48:44.514117  586444 start.go:242] waiting for startup goroutines ...
	I1213 11:48:44.514125  586444 start.go:247] waiting for cluster config update ...
	I1213 11:48:44.514137  586444 start.go:256] writing updated cluster config ...
	I1213 11:48:44.514467  586444 ssh_runner.go:195] Run: rm -f paused
	I1213 11:48:44.518126  586444 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:48:44.524237  586444 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-w2hls" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:48:46.530267  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:49.029833  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:51.032229  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:53.530290  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:56.031461  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:58.031779  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:00.072532  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:02.530763  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:04.536634  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:07.031832  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:09.529825  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:11.530232  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:13.530696  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:15.531282  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:18.031082  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:20.530507  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	I1213 11:49:22.529810  586444 pod_ready.go:94] pod "coredns-5dd5756b68-w2hls" is "Ready"
	I1213 11:49:22.529839  586444 pod_ready.go:86] duration metric: took 38.00552659s for pod "coredns-5dd5756b68-w2hls" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.532776  586444 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.537887  586444 pod_ready.go:94] pod "etcd-old-k8s-version-051699" is "Ready"
	I1213 11:49:22.537972  586444 pod_ready.go:86] duration metric: took 5.166453ms for pod "etcd-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.541072  586444 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.545988  586444 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-051699" is "Ready"
	I1213 11:49:22.546017  586444 pod_ready.go:86] duration metric: took 4.91549ms for pod "kube-apiserver-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.548982  586444 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.729088  586444 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-051699" is "Ready"
	I1213 11:49:22.729162  586444 pod_ready.go:86] duration metric: took 180.153398ms for pod "kube-controller-manager-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.928796  586444 pod_ready.go:83] waiting for pod "kube-proxy-qmcm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.328217  586444 pod_ready.go:94] pod "kube-proxy-qmcm4" is "Ready"
	I1213 11:49:23.328240  586444 pod_ready.go:86] duration metric: took 399.41932ms for pod "kube-proxy-qmcm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.529518  586444 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.928175  586444 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-051699" is "Ready"
	I1213 11:49:23.928199  586444 pod_ready.go:86] duration metric: took 398.64942ms for pod "kube-scheduler-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.928214  586444 pod_ready.go:40] duration metric: took 39.410015836s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:49:23.987401  586444 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1213 11:49:23.990818  586444 out.go:203] 
	W1213 11:49:23.993667  586444 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 11:49:23.996636  586444 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 11:49:24.002017  586444 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-051699" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.758757941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.765077469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.768987232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.788753345Z" level=info msg="Created container db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4/dashboard-metrics-scraper" id=f2828112-31a9-417a-8823-32aec0a4241b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.789890617Z" level=info msg="Starting container: db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f" id=5bea4732-5bd9-4f28-b16f-f57346aa98ee name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.792984908Z" level=info msg="Started container" PID=1642 containerID=db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4/dashboard-metrics-scraper id=5bea4732-5bd9-4f28-b16f-f57346aa98ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f
	Dec 13 11:49:16 old-k8s-version-051699 conmon[1640]: conmon db85d9cc53646d47a901 <ninfo>: container 1642 exited with status 1
	Dec 13 11:49:17 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:17.038205601Z" level=info msg="Removing container: 3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d" id=4135a60c-0591-43e2-9865-8d1634a1ba77 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:49:17 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:17.050169165Z" level=info msg="Error loading conmon cgroup of container 3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d: cgroup deleted" id=4135a60c-0591-43e2-9865-8d1634a1ba77 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:49:17 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:17.054851889Z" level=info msg="Removed container 3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4/dashboard-metrics-scraper" id=4135a60c-0591-43e2-9865-8d1634a1ba77 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.721755992Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.728098298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.728129962Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.728165564Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.732239045Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.732271259Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.732292502Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.735609211Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.735642762Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.735663849Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.73870049Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.738729348Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.738752257Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.742496866Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.742534085Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	db85d9cc53646       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   0523e04ddfb60       dashboard-metrics-scraper-5f989dc9cf-flst4       kubernetes-dashboard
	b59f614a2a32a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   2ad816c2b65ba       storage-provisioner                              kube-system
	ea0421efb7eb5       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   4691975b5bdae       kubernetes-dashboard-8694d4445c-jpkrw            kubernetes-dashboard
	13641f9582975       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   7e01272b7c4b6       coredns-5dd5756b68-w2hls                         kube-system
	c3040e86daf5a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   c3ed6346f05f8       busybox                                          default
	252ea1238e50c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   2ad816c2b65ba       storage-provisioner                              kube-system
	c2561ac9d9b2c       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   b49178099cf22       kube-proxy-qmcm4                                 kube-system
	2e30c95faed49       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           56 seconds ago       Running             kindnet-cni                 1                   c767bdae2eb84       kindnet-n4ht9                                    kube-system
	d1083a84171e8       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   a551fb9f1b47e       kube-controller-manager-old-k8s-version-051699   kube-system
	938d6fe9735fc       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   4d6cfe1d75f7d       kube-scheduler-old-k8s-version-051699            kube-system
	c6727afa741cf       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   aa4572210b215       kube-apiserver-old-k8s-version-051699            kube-system
	f0ad5667d7443       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   d32afdb6eb560       etcd-old-k8s-version-051699                      kube-system
	
	
	==> coredns [13641f9582975492cba357f81a47d368b97b186827c5e55ee5221857ac9af3cb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35276 - 22570 "HINFO IN 4103351345680759076.1825675422965302549. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018285412s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-051699
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-051699
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=old-k8s-version-051699
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_47_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:47:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-051699
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:49:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:48:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-051699
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                1c639966-deba-4cb5-95e6-2e08822bad87
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-w2hls                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-051699                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-n4ht9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-051699             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-051699    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-qmcm4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-051699             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-flst4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jpkrw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-051699 event: Registered Node old-k8s-version-051699 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-051699 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-051699 event: Registered Node old-k8s-version-051699 in Controller
	
	
	==> dmesg <==
	[ +27.964028] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f0ad5667d7443959d98e2dd8e90cf8d0216a7d83e917e88c8d3bcb7016cb1041] <==
	{"level":"info","ts":"2025-12-13T11:48:37.614457Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-13T11:48:37.614668Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-13T11:48:37.615694Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:48:37.617505Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:48:37.617846Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:48:37.618919Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:48:37.626229Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:48:37.625253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-13T11:48:37.62656Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-13T11:48:37.627293Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T11:48:37.627371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T11:48:38.851041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-13T11:48:38.85115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-13T11:48:38.851199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-13T11:48:38.851237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.851265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.851298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.851329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.859808Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-051699 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T11:48:38.86005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T11:48:38.860257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T11:48:38.889124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-13T11:48:38.889635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-13T11:48:38.891207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T11:48:38.891273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:49:39 up  3:32,  0 user,  load average: 1.32, 2.11, 2.02
	Linux old-k8s-version-051699 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e30c95faed4944877f25b1255472ca14b20fc12fbc0176060a435a46b1d39b3] <==
	I1213 11:48:42.524644       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:48:42.525234       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:48:42.525406       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:48:42.525418       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:48:42.525431       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:48:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:48:42.722175       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:48:42.724139       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:48:42.724234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:48:42.724406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:49:12.723218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 11:49:12.724326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:49:12.724362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:49:12.724405       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 11:49:14.225418       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:49:14.225451       1 metrics.go:72] Registering metrics
	I1213 11:49:14.225524       1 controller.go:711] "Syncing nftables rules"
	I1213 11:49:22.721435       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:49:22.721517       1 main.go:301] handling current node
	I1213 11:49:32.726440       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:49:32.726476       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c6727afa741cfa7ae0dee9ad26ea0a874d1c2fc01c26783d2fd633cfe3f8989c] <==
	I1213 11:48:41.663152       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1213 11:48:41.684838       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1213 11:48:41.687920       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 11:48:41.711707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1213 11:48:41.711786       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1213 11:48:41.711819       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 11:48:41.712220       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 11:48:41.714211       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:48:41.736301       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1213 11:48:41.736484       1 aggregator.go:166] initial CRD sync complete...
	I1213 11:48:41.736527       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 11:48:41.736563       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:48:41.736612       1 cache.go:39] Caches are synced for autoregister controller
	E1213 11:48:41.937690       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:48:42.337195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:48:44.238784       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 11:48:44.282223       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 11:48:44.313038       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:48:44.337058       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:48:44.352856       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 11:48:44.421645       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.207.228"}
	I1213 11:48:44.443660       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.197.121"}
	I1213 11:48:54.949728       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1213 11:48:55.338208       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 11:48:55.369019       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d1083a84171e8b109d5205f26f918e35b5462caf78b728353423ce03b323617e] <==
	I1213 11:48:55.220293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="527.507649ms"
	I1213 11:48:55.220455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.151µs"
	I1213 11:48:55.224905       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-flst4"
	I1213 11:48:55.225008       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jpkrw"
	I1213 11:48:55.263744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="306.738352ms"
	I1213 11:48:55.264410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="302.81642ms"
	I1213 11:48:55.286055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.92794ms"
	I1213 11:48:55.286195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.96µs"
	I1213 11:48:55.305291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.8015ms"
	I1213 11:48:55.305437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.899µs"
	I1213 11:48:55.311688       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 11:48:55.315876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.55µs"
	I1213 11:48:55.341605       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 11:48:55.341751       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 11:48:55.350088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.457µs"
	I1213 11:48:55.366697       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1213 11:49:02.032803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.080341ms"
	I1213 11:49:02.032895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.333µs"
	I1213 11:49:06.018447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.481µs"
	I1213 11:49:07.009220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.223µs"
	I1213 11:49:08.011580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.607µs"
	I1213 11:49:17.053568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.951µs"
	I1213 11:49:22.287987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.154678ms"
	I1213 11:49:22.288147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.132µs"
	I1213 11:49:26.769027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.548µs"
	
	
	==> kube-proxy [c2561ac9d9b2c9534552136b22e55b5102dd7123c91b9a47f4f6f0d17845ca3c] <==
	I1213 11:48:43.514628       1 server_others.go:69] "Using iptables proxy"
	I1213 11:48:43.636173       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1213 11:48:44.284415       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:48:44.323378       1 server_others.go:152] "Using iptables Proxier"
	I1213 11:48:44.330441       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 11:48:44.333165       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 11:48:44.339582       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 11:48:44.343150       1 server.go:846] "Version info" version="v1.28.0"
	I1213 11:48:44.343484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:48:44.363009       1 config.go:188] "Starting service config controller"
	I1213 11:48:44.363129       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 11:48:44.363214       1 config.go:97] "Starting endpoint slice config controller"
	I1213 11:48:44.363269       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 11:48:44.368314       1 config.go:315] "Starting node config controller"
	I1213 11:48:44.368409       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 11:48:44.463406       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 11:48:44.463647       1 shared_informer.go:318] Caches are synced for service config
	I1213 11:48:44.469008       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [938d6fe9735fccb0295851287a2c46be9275edddee6d4e17fa2757fee05fb949] <==
	I1213 11:48:41.268655       1 serving.go:348] Generated self-signed cert in-memory
	I1213 11:48:44.839307       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1213 11:48:44.839338       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:48:44.843329       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1213 11:48:44.843424       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1213 11:48:44.843499       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:48:44.843581       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 11:48:44.843630       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:48:44.843659       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1213 11:48:44.844124       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1213 11:48:44.844226       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1213 11:48:44.944152       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1213 11:48:44.944152       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1213 11:48:44.944180       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: E1213 11:48:55.266139     784 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-051699" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-051699' and this object
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373318     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3881b100-2d7e-4826-81ca-33ce091f0e54-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-flst4\" (UID: \"3881b100-2d7e-4826-81ca-33ce091f0e54\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4"
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373514     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw28w\" (UniqueName: \"kubernetes.io/projected/76679ef8-1925-4cdb-9473-1acc7e6609c7-kube-api-access-cw28w\") pod \"kubernetes-dashboard-8694d4445c-jpkrw\" (UID: \"76679ef8-1925-4cdb-9473-1acc7e6609c7\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jpkrw"
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373640     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvbn\" (UniqueName: \"kubernetes.io/projected/3881b100-2d7e-4826-81ca-33ce091f0e54-kube-api-access-2zvbn\") pod \"dashboard-metrics-scraper-5f989dc9cf-flst4\" (UID: \"3881b100-2d7e-4826-81ca-33ce091f0e54\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4"
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373758     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/76679ef8-1925-4cdb-9473-1acc7e6609c7-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jpkrw\" (UID: \"76679ef8-1925-4cdb-9473-1acc7e6609c7\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jpkrw"
	Dec 13 11:48:56 old-k8s-version-051699 kubelet[784]: W1213 11:48:56.481251     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-4691975b5bdaeedf277f3b34a406d72b4576d3886314a4cf011877d7656825f7 WatchSource:0}: Error finding container 4691975b5bdaeedf277f3b34a406d72b4576d3886314a4cf011877d7656825f7: Status 404 returned error can't find the container with id 4691975b5bdaeedf277f3b34a406d72b4576d3886314a4cf011877d7656825f7
	Dec 13 11:48:56 old-k8s-version-051699 kubelet[784]: W1213 11:48:56.772638     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f WatchSource:0}: Error finding container 0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f: Status 404 returned error can't find the container with id 0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f
	Dec 13 11:49:05 old-k8s-version-051699 kubelet[784]: I1213 11:49:05.987566     784 scope.go:117] "RemoveContainer" containerID="0fa8ef9d70e639cff07e6e1fe1dce5e70aef612fc7c15db1de207a7ba40eaf0a"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: I1213 11:49:06.018092     784 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jpkrw" podStartSLOduration=6.074268937 podCreationTimestamp="2025-12-13 11:48:55 +0000 UTC" firstStartedPulling="2025-12-13 11:48:56.486140592 +0000 UTC m=+19.852544610" lastFinishedPulling="2025-12-13 11:49:01.428591996 +0000 UTC m=+24.794996014" observedRunningTime="2025-12-13 11:49:02.006975317 +0000 UTC m=+25.373379351" watchObservedRunningTime="2025-12-13 11:49:06.016720341 +0000 UTC m=+29.383124367"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: I1213 11:49:06.991505     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: I1213 11:49:06.992489     784 scope.go:117] "RemoveContainer" containerID="0fa8ef9d70e639cff07e6e1fe1dce5e70aef612fc7c15db1de207a7ba40eaf0a"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: E1213 11:49:06.993366     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:07 old-k8s-version-051699 kubelet[784]: I1213 11:49:07.994738     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:07 old-k8s-version-051699 kubelet[784]: E1213 11:49:07.995012     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:14 old-k8s-version-051699 kubelet[784]: I1213 11:49:14.018292     784 scope.go:117] "RemoveContainer" containerID="252ea1238e50c637074d8e48f4c01b8d464d784a390cecebc97a291bc3d45d6c"
	Dec 13 11:49:16 old-k8s-version-051699 kubelet[784]: I1213 11:49:16.754709     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:17 old-k8s-version-051699 kubelet[784]: I1213 11:49:17.030603     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:17 old-k8s-version-051699 kubelet[784]: I1213 11:49:17.030853     784 scope.go:117] "RemoveContainer" containerID="db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f"
	Dec 13 11:49:17 old-k8s-version-051699 kubelet[784]: E1213 11:49:17.031121     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:26 old-k8s-version-051699 kubelet[784]: I1213 11:49:26.755098     784 scope.go:117] "RemoveContainer" containerID="db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f"
	Dec 13 11:49:26 old-k8s-version-051699 kubelet[784]: E1213 11:49:26.755885     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:36 old-k8s-version-051699 kubelet[784]: I1213 11:49:36.295339     784 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 13 11:49:36 old-k8s-version-051699 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:49:36 old-k8s-version-051699 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:49:36 old-k8s-version-051699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ea0421efb7eb56b9e66dde1a483a1434ed846923a3c65b69a736c2f66a0ecb91] <==
	2025/12/13 11:49:01 Starting overwatch
	2025/12/13 11:49:01 Using namespace: kubernetes-dashboard
	2025/12/13 11:49:01 Using in-cluster config to connect to apiserver
	2025/12/13 11:49:01 Using secret token for csrf signing
	2025/12/13 11:49:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 11:49:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 11:49:01 Successful initial request to the apiserver, version: v1.28.0
	2025/12/13 11:49:01 Generating JWE encryption key
	2025/12/13 11:49:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 11:49:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 11:49:02 Initializing JWE encryption key from synchronized object
	2025/12/13 11:49:02 Creating in-cluster Sidecar client
	2025/12/13 11:49:02 Serving insecurely on HTTP port: 9090
	2025/12/13 11:49:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:49:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [252ea1238e50c637074d8e48f4c01b8d464d784a390cecebc97a291bc3d45d6c] <==
	I1213 11:48:43.060130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 11:49:13.161828       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b59f614a2a32ae75a805e01d493986cd39e6a71e3aed6253427f6024f7790b2e] <==
	I1213 11:49:14.074430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:49:14.093606       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:49:14.094731       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 11:49:31.493108       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:49:31.493358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-051699_cc8eedc4-c218-4a8a-818b-b095a84d5222!
	I1213 11:49:31.493503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9bc71b9-ee13-4321-8101-70d105400c33", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-051699_cc8eedc4-c218-4a8a-818b-b095a84d5222 became leader
	I1213 11:49:31.594417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-051699_cc8eedc4-c218-4a8a-818b-b095a84d5222!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-051699 -n old-k8s-version-051699
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-051699 -n old-k8s-version-051699: exit status 2 (409.167742ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-051699 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-051699
helpers_test.go:244: (dbg) docker inspect old-k8s-version-051699:

-- stdout --
	[
	    {
	        "Id": "5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b",
	        "Created": "2025-12-13T11:47:09.22535414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 586575,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:48:29.909769123Z",
	            "FinishedAt": "2025-12-13T11:48:29.088172023Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/hosts",
	        "LogPath": "/var/lib/docker/containers/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b-json.log",
	        "Name": "/old-k8s-version-051699",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-051699:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-051699",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b",
	                "LowerDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c423ef5860499aa61df7154bd586dc48068ab3f3f5a53705d2a5cd6e312a520/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-051699",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-051699/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-051699",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-051699",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-051699",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94116e67eb08bec722555a345354f9365bba84c994aed92458a5141ba1302061",
	            "SandboxKey": "/var/run/docker/netns/94116e67eb08",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-051699": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:9b:7f:dd:ff:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a6116ab229e22ee69821bd47d3a0f489af279d0545fa10007411817efdd59740",
	                    "EndpointID": "5ee19d2a4e7022f605d19107b439ec37c82f4ff9c7bd08ed38c0bb4549d4233c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-051699",
	                        "5e184c16699d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699: exit status 2 (388.170998ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-051699 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-051699 logs -n 25: (1.338222751s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-062409 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo containerd config dump                                                                                                                                                                                                  │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo crio config                                                                                                                                                                                                             │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ delete  │ -p cilium-062409                                                                                                                                                                                                                              │ cilium-062409             │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:45 UTC │
	│ start   │ -p force-systemd-env-181508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-181508  │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p kubernetes-upgrade-854588                                                                                                                                                                                                                  │ kubernetes-upgrade-854588 │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420007    │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p force-systemd-env-181508                                                                                                                                                                                                                   │ force-systemd-env-181508  │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-options-522461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ ssh     │ cert-options-522461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:47 UTC │
	│ ssh     │ -p cert-options-522461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ delete  │ -p cert-options-522461                                                                                                                                                                                                                        │ cert-options-522461       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │                     │
	│ stop    │ -p old-k8s-version-051699 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-051699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699    │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:48:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:48:29.629865  586444 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:48:29.630066  586444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:48:29.630091  586444 out.go:374] Setting ErrFile to fd 2...
	I1213 11:48:29.630109  586444 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:48:29.630476  586444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:48:29.631082  586444 out.go:368] Setting JSON to false
	I1213 11:48:29.632261  586444 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12662,"bootTime":1765613848,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:48:29.632360  586444 start.go:143] virtualization:  
	I1213 11:48:29.635463  586444 out.go:179] * [old-k8s-version-051699] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:48:29.639358  586444 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:48:29.639433  586444 notify.go:221] Checking for updates...
	I1213 11:48:29.645470  586444 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:48:29.648391  586444 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:48:29.651310  586444 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:48:29.654210  586444 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:48:29.657146  586444 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:48:29.660532  586444 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:48:29.664237  586444 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1213 11:48:29.667153  586444 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:48:29.700123  586444 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:48:29.700268  586444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:48:29.755886  586444 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:48:29.746142496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:48:29.755992  586444 docker.go:319] overlay module found
	I1213 11:48:29.759143  586444 out.go:179] * Using the docker driver based on existing profile
	I1213 11:48:29.761991  586444 start.go:309] selected driver: docker
	I1213 11:48:29.762018  586444 start.go:927] validating driver "docker" against &{Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:48:29.762146  586444 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:48:29.762964  586444 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:48:29.815887  586444 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:48:29.8071295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:48:29.816223  586444 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:48:29.816255  586444 cni.go:84] Creating CNI manager for ""
	I1213 11:48:29.816312  586444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:48:29.816349  586444 start.go:353] cluster config:
	{Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:48:29.819736  586444 out.go:179] * Starting "old-k8s-version-051699" primary control-plane node in "old-k8s-version-051699" cluster
	I1213 11:48:29.822546  586444 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:48:29.825446  586444 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:48:29.828230  586444 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:48:29.828304  586444 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 11:48:29.828332  586444 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:48:29.828344  586444 cache.go:65] Caching tarball of preloaded images
	I1213 11:48:29.828444  586444 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:48:29.828461  586444 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1213 11:48:29.828573  586444 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/config.json ...
	I1213 11:48:29.847860  586444 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:48:29.847888  586444 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:48:29.847904  586444 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:48:29.847934  586444 start.go:360] acquireMachinesLock for old-k8s-version-051699: {Name:mk7421d20807d926bcc4f5128055e7d390596771 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:48:29.847991  586444 start.go:364] duration metric: took 34.921µs to acquireMachinesLock for "old-k8s-version-051699"
	I1213 11:48:29.848013  586444 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:48:29.848020  586444 fix.go:54] fixHost starting: 
	I1213 11:48:29.848282  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:29.866006  586444 fix.go:112] recreateIfNeeded on old-k8s-version-051699: state=Stopped err=<nil>
	W1213 11:48:29.866034  586444 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:48:29.869285  586444 out.go:252] * Restarting existing docker container for "old-k8s-version-051699" ...
	I1213 11:48:29.869369  586444 cli_runner.go:164] Run: docker start old-k8s-version-051699
	I1213 11:48:30.161387  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:30.185705  586444 kic.go:430] container "old-k8s-version-051699" state is running.
	I1213 11:48:30.186123  586444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:48:30.213502  586444 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/config.json ...
	I1213 11:48:30.213748  586444 machine.go:94] provisionDockerMachine start ...
	I1213 11:48:30.213815  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:30.236189  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:30.236710  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:30.236726  586444 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:48:30.237940  586444 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44646->127.0.0.1:33433: read: connection reset by peer
	I1213 11:48:33.399081  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-051699
	
	I1213 11:48:33.399110  586444 ubuntu.go:182] provisioning hostname "old-k8s-version-051699"
	I1213 11:48:33.399179  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:33.415973  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:33.416297  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:33.416316  586444 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-051699 && echo "old-k8s-version-051699" | sudo tee /etc/hostname
	I1213 11:48:33.577992  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-051699
	
	I1213 11:48:33.578070  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:33.597813  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:33.598127  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:33.598154  586444 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-051699' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-051699/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-051699' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:48:33.747810  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:48:33.747836  586444 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:48:33.747874  586444 ubuntu.go:190] setting up certificates
	I1213 11:48:33.747885  586444 provision.go:84] configureAuth start
	I1213 11:48:33.747963  586444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:48:33.765303  586444 provision.go:143] copyHostCerts
	I1213 11:48:33.765459  586444 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:48:33.765474  586444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:48:33.765559  586444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:48:33.765676  586444 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:48:33.765688  586444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:48:33.765718  586444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:48:33.765787  586444 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:48:33.765802  586444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:48:33.765831  586444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:48:33.765897  586444 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-051699 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-051699]
	I1213 11:48:34.026605  586444 provision.go:177] copyRemoteCerts
	I1213 11:48:34.026676  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:48:34.026719  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.046409  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.151787  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1213 11:48:34.172155  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:48:34.191142  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:48:34.209958  586444 provision.go:87] duration metric: took 462.038592ms to configureAuth
	I1213 11:48:34.210028  586444 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:48:34.210247  586444 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:48:34.210358  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.227912  586444 main.go:143] libmachine: Using SSH client type: native
	I1213 11:48:34.228231  586444 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1213 11:48:34.228252  586444 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:48:34.587129  586444 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:48:34.587157  586444 machine.go:97] duration metric: took 4.37339504s to provisionDockerMachine
	I1213 11:48:34.587169  586444 start.go:293] postStartSetup for "old-k8s-version-051699" (driver="docker")
	I1213 11:48:34.587180  586444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:48:34.587243  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:48:34.587297  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.608419  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.719234  586444 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:48:34.722472  586444 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:48:34.722499  586444 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:48:34.722510  586444 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:48:34.722564  586444 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:48:34.722669  586444 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:48:34.722770  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:48:34.730206  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:48:34.748099  586444 start.go:296] duration metric: took 160.913882ms for postStartSetup
	I1213 11:48:34.748186  586444 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:48:34.748229  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.766043  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.868538  586444 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:48:34.873380  586444 fix.go:56] duration metric: took 5.02535291s for fixHost
	I1213 11:48:34.873409  586444 start.go:83] releasing machines lock for "old-k8s-version-051699", held for 5.025404659s
	I1213 11:48:34.873487  586444 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-051699
	I1213 11:48:34.890286  586444 ssh_runner.go:195] Run: cat /version.json
	I1213 11:48:34.890342  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.890377  586444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:48:34.890431  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:34.908850  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:34.913404  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:35.125225  586444 ssh_runner.go:195] Run: systemctl --version
	I1213 11:48:35.132212  586444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:48:35.170610  586444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:48:35.174969  586444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:48:35.175042  586444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:48:35.185275  586444 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:48:35.185299  586444 start.go:496] detecting cgroup driver to use...
	I1213 11:48:35.185330  586444 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:48:35.185386  586444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:48:35.201031  586444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:48:35.214246  586444 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:48:35.214337  586444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:48:35.230410  586444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:48:35.243860  586444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:48:35.366682  586444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:48:35.485573  586444 docker.go:234] disabling docker service ...
	I1213 11:48:35.485666  586444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:48:35.501797  586444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:48:35.514746  586444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:48:35.625857  586444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:48:35.749772  586444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:48:35.763504  586444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:48:35.778059  586444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1213 11:48:35.778155  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.787787  586444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:48:35.787887  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.797243  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.805994  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.814821  586444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:48:35.823097  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.832828  586444 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.841573  586444 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:48:35.850699  586444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:48:35.858577  586444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:48:35.866259  586444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:48:35.977547  586444 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:48:36.162980  586444 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:48:36.163048  586444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:48:36.166890  586444 start.go:564] Will wait 60s for crictl version
	I1213 11:48:36.166997  586444 ssh_runner.go:195] Run: which crictl
	I1213 11:48:36.170667  586444 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:48:36.196304  586444 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:48:36.196476  586444 ssh_runner.go:195] Run: crio --version
	I1213 11:48:36.225823  586444 ssh_runner.go:195] Run: crio --version
	I1213 11:48:36.257937  586444 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1213 11:48:36.260682  586444 cli_runner.go:164] Run: docker network inspect old-k8s-version-051699 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:48:36.280585  586444 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:48:36.284297  586444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:48:36.293931  586444 kubeadm.go:884] updating cluster {Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:48:36.294044  586444 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 11:48:36.294105  586444 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:48:36.330621  586444 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:48:36.330648  586444 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:48:36.330707  586444 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:48:36.363811  586444 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:48:36.363835  586444 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:48:36.363843  586444 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1213 11:48:36.363944  586444 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-051699 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:48:36.364030  586444 ssh_runner.go:195] Run: crio config
	I1213 11:48:36.425952  586444 cni.go:84] Creating CNI manager for ""
	I1213 11:48:36.426043  586444 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:48:36.426088  586444 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:48:36.426143  586444 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-051699 NodeName:old-k8s-version-051699 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:48:36.426387  586444 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-051699"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:48:36.426516  586444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1213 11:48:36.437425  586444 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:48:36.437564  586444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:48:36.445523  586444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1213 11:48:36.457903  586444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:48:36.472609  586444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1213 11:48:36.486219  586444 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:48:36.489696  586444 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:48:36.499336  586444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:48:36.614185  586444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:48:36.630803  586444 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699 for IP: 192.168.85.2
	I1213 11:48:36.630868  586444 certs.go:195] generating shared ca certs ...
	I1213 11:48:36.630901  586444 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:36.631072  586444 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:48:36.631149  586444 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:48:36.631174  586444 certs.go:257] generating profile certs ...
	I1213 11:48:36.631285  586444 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.key
	I1213 11:48:36.631389  586444 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key.8b85897d
	I1213 11:48:36.631462  586444 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key
	I1213 11:48:36.631645  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:48:36.631714  586444 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:48:36.631751  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:48:36.631812  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:48:36.631865  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:48:36.631913  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:48:36.631991  586444 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:48:36.632653  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:48:36.660221  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:48:36.680701  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:48:36.702748  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:48:36.728873  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1213 11:48:36.750963  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:48:36.778938  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:48:36.803214  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:48:36.822651  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:48:36.850990  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:48:36.871910  586444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:48:36.891332  586444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:48:36.909502  586444 ssh_runner.go:195] Run: openssl version
	I1213 11:48:36.917904  586444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.927820  586444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:48:36.936172  586444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.939917  586444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.940008  586444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:48:36.983151  586444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:48:36.990576  586444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:36.998062  586444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:48:37.008000  586444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:37.012978  586444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:37.013050  586444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:48:37.055827  586444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:48:37.063485  586444 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.070840  586444 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:48:37.078769  586444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.082798  586444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.082861  586444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:48:37.129494  586444 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:48:37.137991  586444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:48:37.141614  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:48:37.183155  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:48:37.227339  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:48:37.278156  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:48:37.348466  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:48:37.445982  586444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 11:48:37.512732  586444 kubeadm.go:401] StartCluster: {Name:old-k8s-version-051699 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-051699 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:48:37.512840  586444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:48:37.512922  586444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:48:37.568567  586444 cri.go:89] found id: "d1083a84171e8b109d5205f26f918e35b5462caf78b728353423ce03b323617e"
	I1213 11:48:37.568608  586444 cri.go:89] found id: "938d6fe9735fccb0295851287a2c46be9275edddee6d4e17fa2757fee05fb949"
	I1213 11:48:37.568614  586444 cri.go:89] found id: "c6727afa741cfa7ae0dee9ad26ea0a874d1c2fc01c26783d2fd633cfe3f8989c"
	I1213 11:48:37.568618  586444 cri.go:89] found id: "f0ad5667d7443959d98e2dd8e90cf8d0216a7d83e917e88c8d3bcb7016cb1041"
	I1213 11:48:37.568630  586444 cri.go:89] found id: ""
	I1213 11:48:37.568710  586444 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:48:37.589176  586444 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:48:37Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:48:37.589282  586444 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:48:37.603224  586444 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:48:37.603293  586444 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:48:37.603364  586444 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:48:37.614095  586444 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:48:37.614759  586444 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-051699" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:48:37.615079  586444 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-051699" cluster setting kubeconfig missing "old-k8s-version-051699" context setting]
	I1213 11:48:37.615605  586444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:37.617153  586444 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:48:37.632138  586444 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:48:37.632211  586444 kubeadm.go:602] duration metric: took 28.897644ms to restartPrimaryControlPlane
	I1213 11:48:37.632237  586444 kubeadm.go:403] duration metric: took 119.525865ms to StartCluster
	I1213 11:48:37.632269  586444 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:37.632361  586444 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:48:37.633271  586444 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:48:37.633522  586444 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:48:37.633920  586444 config.go:182] Loaded profile config "old-k8s-version-051699": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 11:48:37.633987  586444 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:48:37.634055  586444 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-051699"
	I1213 11:48:37.634064  586444 addons.go:70] Setting dashboard=true in profile "old-k8s-version-051699"
	I1213 11:48:37.634087  586444 addons.go:239] Setting addon dashboard=true in "old-k8s-version-051699"
	W1213 11:48:37.634095  586444 addons.go:248] addon dashboard should already be in state true
	I1213 11:48:37.634137  586444 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:48:37.634069  586444 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-051699"
	W1213 11:48:37.634190  586444 addons.go:248] addon storage-provisioner should already be in state true
	I1213 11:48:37.634212  586444 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:48:37.634636  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.634730  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.634073  586444 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-051699"
	I1213 11:48:37.635100  586444 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-051699"
	I1213 11:48:37.635356  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.639329  586444 out.go:179] * Verifying Kubernetes components...
	I1213 11:48:37.642545  586444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:48:37.694267  586444 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-051699"
	W1213 11:48:37.694295  586444 addons.go:248] addon default-storageclass should already be in state true
	I1213 11:48:37.694320  586444 host.go:66] Checking if "old-k8s-version-051699" exists ...
	I1213 11:48:37.694768  586444 cli_runner.go:164] Run: docker container inspect old-k8s-version-051699 --format={{.State.Status}}
	I1213 11:48:37.699988  586444 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:48:37.705313  586444 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:48:37.705337  586444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:48:37.705413  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:37.713705  586444 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:48:37.718193  586444 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 11:48:37.723617  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:48:37.723649  586444 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:48:37.723723  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:37.743673  586444 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:48:37.743697  586444 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:48:37.743763  586444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-051699
	I1213 11:48:37.746545  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:37.783636  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:37.791090  586444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/old-k8s-version-051699/id_rsa Username:docker}
	I1213 11:48:37.985280  586444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:48:37.997317  586444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:48:38.036496  586444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:48:38.187768  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:48:38.187840  586444 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:48:38.290108  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:48:38.290186  586444 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:48:38.321444  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:48:38.321516  586444 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:48:38.344186  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:48:38.344258  586444 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:48:38.368085  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:48:38.368163  586444 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:48:38.384480  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:48:38.384501  586444 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:48:38.407051  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:48:38.407073  586444 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:48:38.435160  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:48:38.435183  586444 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:48:38.459495  586444 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:48:38.459577  586444 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:48:38.489900  586444 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:48:43.843978  586444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.858664833s)
	I1213 11:48:43.844033  586444 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.846644812s)
	I1213 11:48:43.844054  586444 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-051699" to be "Ready" ...
	I1213 11:48:43.844378  586444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.807802451s)
	I1213 11:48:43.895367  586444 node_ready.go:49] node "old-k8s-version-051699" is "Ready"
	I1213 11:48:43.895398  586444 node_ready.go:38] duration metric: took 51.327355ms for node "old-k8s-version-051699" to be "Ready" ...
	I1213 11:48:43.895412  586444 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:48:43.895477  586444 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:48:44.452396  586444 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.962404227s)
	I1213 11:48:44.452505  586444 api_server.go:72] duration metric: took 6.818924941s to wait for apiserver process to appear ...
	I1213 11:48:44.452654  586444 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:48:44.452682  586444 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 11:48:44.455868  586444 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-051699 addons enable metrics-server
	
	I1213 11:48:44.458888  586444 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1213 11:48:44.461928  586444 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 11:48:44.462148  586444 addons.go:530] duration metric: took 6.8281589s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1213 11:48:44.463479  586444 api_server.go:141] control plane version: v1.28.0
	I1213 11:48:44.463505  586444 api_server.go:131] duration metric: took 10.842242ms to wait for apiserver health ...
	I1213 11:48:44.463562  586444 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:48:44.473649  586444 system_pods.go:59] 8 kube-system pods found
	I1213 11:48:44.473694  586444 system_pods.go:61] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:44.473703  586444 system_pods.go:61] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:48:44.473708  586444 system_pods.go:61] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:44.473715  586444 system_pods.go:61] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:48:44.473722  586444 system_pods.go:61] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:48:44.473727  586444 system_pods.go:61] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:44.473732  586444 system_pods.go:61] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:48:44.473736  586444 system_pods.go:61] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Running
	I1213 11:48:44.473748  586444 system_pods.go:74] duration metric: took 10.180247ms to wait for pod list to return data ...
	I1213 11:48:44.473756  586444 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:48:44.477278  586444 default_sa.go:45] found service account: "default"
	I1213 11:48:44.477304  586444 default_sa.go:55] duration metric: took 3.541957ms for default service account to be created ...
	I1213 11:48:44.477323  586444 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:48:44.480930  586444 system_pods.go:86] 8 kube-system pods found
	I1213 11:48:44.480968  586444 system_pods.go:89] "coredns-5dd5756b68-w2hls" [ae27a521-38ba-4d9d-8b84-6dfc46e48388] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:48:44.480979  586444 system_pods.go:89] "etcd-old-k8s-version-051699" [09a82a43-a427-4010-818e-7d87644712a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:48:44.480986  586444 system_pods.go:89] "kindnet-n4ht9" [9abd4e51-c2b6-44ea-8b75-ca7f080370fa] Running
	I1213 11:48:44.480993  586444 system_pods.go:89] "kube-apiserver-old-k8s-version-051699" [e9cac38f-f145-4903-9006-fef05e00da67] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:48:44.481002  586444 system_pods.go:89] "kube-controller-manager-old-k8s-version-051699" [fc3d242a-c95b-41f0-a002-ba89275429a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:48:44.481007  586444 system_pods.go:89] "kube-proxy-qmcm4" [8ab5345a-ad4d-4d16-9728-12b05b662fc6] Running
	I1213 11:48:44.481022  586444 system_pods.go:89] "kube-scheduler-old-k8s-version-051699" [81301076-b694-4be4-a192-41d334af72ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:48:44.481028  586444 system_pods.go:89] "storage-provisioner" [0fb2212c-6b12-43c4-8d5a-575f27bea92e] Running
	I1213 11:48:44.481043  586444 system_pods.go:126] duration metric: took 3.7051ms to wait for k8s-apps to be running ...
	I1213 11:48:44.481056  586444 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:48:44.481122  586444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:48:44.509090  586444 system_svc.go:56] duration metric: took 28.023226ms WaitForService to wait for kubelet
	I1213 11:48:44.509124  586444 kubeadm.go:587] duration metric: took 6.875542983s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:48:44.509152  586444 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:48:44.514046  586444 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:48:44.514090  586444 node_conditions.go:123] node cpu capacity is 2
	I1213 11:48:44.514103  586444 node_conditions.go:105] duration metric: took 4.946159ms to run NodePressure ...
	I1213 11:48:44.514117  586444 start.go:242] waiting for startup goroutines ...
	I1213 11:48:44.514125  586444 start.go:247] waiting for cluster config update ...
	I1213 11:48:44.514137  586444 start.go:256] writing updated cluster config ...
	I1213 11:48:44.514467  586444 ssh_runner.go:195] Run: rm -f paused
	I1213 11:48:44.518126  586444 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:48:44.524237  586444 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-w2hls" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:48:46.530267  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:49.029833  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:51.032229  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:53.530290  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:56.031461  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:48:58.031779  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:00.072532  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:02.530763  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:04.536634  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:07.031832  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:09.529825  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:11.530232  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:13.530696  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:15.531282  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:18.031082  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	W1213 11:49:20.530507  586444 pod_ready.go:104] pod "coredns-5dd5756b68-w2hls" is not "Ready", error: <nil>
	I1213 11:49:22.529810  586444 pod_ready.go:94] pod "coredns-5dd5756b68-w2hls" is "Ready"
	I1213 11:49:22.529839  586444 pod_ready.go:86] duration metric: took 38.00552659s for pod "coredns-5dd5756b68-w2hls" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.532776  586444 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.537887  586444 pod_ready.go:94] pod "etcd-old-k8s-version-051699" is "Ready"
	I1213 11:49:22.537972  586444 pod_ready.go:86] duration metric: took 5.166453ms for pod "etcd-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.541072  586444 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.545988  586444 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-051699" is "Ready"
	I1213 11:49:22.546017  586444 pod_ready.go:86] duration metric: took 4.91549ms for pod "kube-apiserver-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.548982  586444 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.729088  586444 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-051699" is "Ready"
	I1213 11:49:22.729162  586444 pod_ready.go:86] duration metric: took 180.153398ms for pod "kube-controller-manager-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:22.928796  586444 pod_ready.go:83] waiting for pod "kube-proxy-qmcm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.328217  586444 pod_ready.go:94] pod "kube-proxy-qmcm4" is "Ready"
	I1213 11:49:23.328240  586444 pod_ready.go:86] duration metric: took 399.41932ms for pod "kube-proxy-qmcm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.529518  586444 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.928175  586444 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-051699" is "Ready"
	I1213 11:49:23.928199  586444 pod_ready.go:86] duration metric: took 398.64942ms for pod "kube-scheduler-old-k8s-version-051699" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:49:23.928214  586444 pod_ready.go:40] duration metric: took 39.410015836s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:49:23.987401  586444 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1213 11:49:23.990818  586444 out.go:203] 
	W1213 11:49:23.993667  586444 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 11:49:23.996636  586444 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 11:49:24.002017  586444 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-051699" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.758757941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.765077469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.768987232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.788753345Z" level=info msg="Created container db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4/dashboard-metrics-scraper" id=f2828112-31a9-417a-8823-32aec0a4241b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.789890617Z" level=info msg="Starting container: db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f" id=5bea4732-5bd9-4f28-b16f-f57346aa98ee name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:49:16 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:16.792984908Z" level=info msg="Started container" PID=1642 containerID=db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4/dashboard-metrics-scraper id=5bea4732-5bd9-4f28-b16f-f57346aa98ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f
	Dec 13 11:49:16 old-k8s-version-051699 conmon[1640]: conmon db85d9cc53646d47a901 <ninfo>: container 1642 exited with status 1
	Dec 13 11:49:17 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:17.038205601Z" level=info msg="Removing container: 3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d" id=4135a60c-0591-43e2-9865-8d1634a1ba77 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:49:17 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:17.050169165Z" level=info msg="Error loading conmon cgroup of container 3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d: cgroup deleted" id=4135a60c-0591-43e2-9865-8d1634a1ba77 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:49:17 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:17.054851889Z" level=info msg="Removed container 3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4/dashboard-metrics-scraper" id=4135a60c-0591-43e2-9865-8d1634a1ba77 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.721755992Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.728098298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.728129962Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.728165564Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.732239045Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.732271259Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.732292502Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.735609211Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.735642762Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.735663849Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.73870049Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.738729348Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.738752257Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.742496866Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:49:22 old-k8s-version-051699 crio[654]: time="2025-12-13T11:49:22.742534085Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	db85d9cc53646       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   0523e04ddfb60       dashboard-metrics-scraper-5f989dc9cf-flst4       kubernetes-dashboard
	b59f614a2a32a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   2ad816c2b65ba       storage-provisioner                              kube-system
	ea0421efb7eb5       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   4691975b5bdae       kubernetes-dashboard-8694d4445c-jpkrw            kubernetes-dashboard
	13641f9582975       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           58 seconds ago       Running             coredns                     1                   7e01272b7c4b6       coredns-5dd5756b68-w2hls                         kube-system
	c3040e86daf5a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   c3ed6346f05f8       busybox                                          default
	252ea1238e50c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   2ad816c2b65ba       storage-provisioner                              kube-system
	c2561ac9d9b2c       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           58 seconds ago       Running             kube-proxy                  1                   b49178099cf22       kube-proxy-qmcm4                                 kube-system
	2e30c95faed49       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           58 seconds ago       Running             kindnet-cni                 1                   c767bdae2eb84       kindnet-n4ht9                                    kube-system
	d1083a84171e8       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   a551fb9f1b47e       kube-controller-manager-old-k8s-version-051699   kube-system
	938d6fe9735fc       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   4d6cfe1d75f7d       kube-scheduler-old-k8s-version-051699            kube-system
	c6727afa741cf       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   aa4572210b215       kube-apiserver-old-k8s-version-051699            kube-system
	f0ad5667d7443       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   d32afdb6eb560       etcd-old-k8s-version-051699                      kube-system
	
	
	==> coredns [13641f9582975492cba357f81a47d368b97b186827c5e55ee5221857ac9af3cb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35276 - 22570 "HINFO IN 4103351345680759076.1825675422965302549. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018285412s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-051699
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-051699
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=old-k8s-version-051699
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_47_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:47:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-051699
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:49:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:47:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:49:12 +0000   Sat, 13 Dec 2025 11:48:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-051699
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                1c639966-deba-4cb5-95e6-2e08822bad87
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-5dd5756b68-w2hls                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-old-k8s-version-051699                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-n4ht9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-051699             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-051699    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-qmcm4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-051699             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-flst4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jpkrw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-051699 event: Registered Node old-k8s-version-051699 in Controller
	  Normal  NodeReady                99s                    kubelet          Node old-k8s-version-051699 status is now: NodeReady
	  Normal  Starting                 65s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node old-k8s-version-051699 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node old-k8s-version-051699 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                    node-controller  Node old-k8s-version-051699 event: Registered Node old-k8s-version-051699 in Controller
	
	
	==> dmesg <==
	[ +27.964028] overlayfs: idmapped layers are currently not supported
	[Dec13 11:16] overlayfs: idmapped layers are currently not supported
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f0ad5667d7443959d98e2dd8e90cf8d0216a7d83e917e88c8d3bcb7016cb1041] <==
	{"level":"info","ts":"2025-12-13T11:48:37.614457Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-13T11:48:37.614668Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-12-13T11:48:37.615694Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:48:37.617505Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:48:37.617846Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-13T11:48:37.618919Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:48:37.626229Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-13T11:48:37.625253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-13T11:48:37.62656Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-13T11:48:37.627293Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T11:48:37.627371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T11:48:38.851041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-13T11:48:38.85115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-13T11:48:38.851199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-13T11:48:38.851237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.851265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.851298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.851329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-12-13T11:48:38.859808Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-051699 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T11:48:38.86005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T11:48:38.860257Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T11:48:38.889124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-12-13T11:48:38.889635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-13T11:48:38.891207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T11:48:38.891273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:49:41 up  3:32,  0 user,  load average: 1.32, 2.11, 2.02
	Linux old-k8s-version-051699 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2e30c95faed4944877f25b1255472ca14b20fc12fbc0176060a435a46b1d39b3] <==
	I1213 11:48:42.524644       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:48:42.525234       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:48:42.525406       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:48:42.525418       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:48:42.525431       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:48:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:48:42.722175       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:48:42.724139       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:48:42.724234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:48:42.724406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:49:12.723218       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 11:49:12.724326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:49:12.724362       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:49:12.724405       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 11:49:14.225418       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:49:14.225451       1 metrics.go:72] Registering metrics
	I1213 11:49:14.225524       1 controller.go:711] "Syncing nftables rules"
	I1213 11:49:22.721435       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:49:22.721517       1 main.go:301] handling current node
	I1213 11:49:32.726440       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:49:32.726476       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c6727afa741cfa7ae0dee9ad26ea0a874d1c2fc01c26783d2fd633cfe3f8989c] <==
	I1213 11:48:41.663152       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1213 11:48:41.684838       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1213 11:48:41.687920       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 11:48:41.711707       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1213 11:48:41.711786       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1213 11:48:41.711819       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 11:48:41.712220       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 11:48:41.714211       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:48:41.736301       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1213 11:48:41.736484       1 aggregator.go:166] initial CRD sync complete...
	I1213 11:48:41.736527       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 11:48:41.736563       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:48:41.736612       1 cache.go:39] Caches are synced for autoregister controller
	E1213 11:48:41.937690       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:48:42.337195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:48:44.238784       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 11:48:44.282223       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 11:48:44.313038       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:48:44.337058       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:48:44.352856       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 11:48:44.421645       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.207.228"}
	I1213 11:48:44.443660       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.197.121"}
	I1213 11:48:54.949728       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1213 11:48:55.338208       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 11:48:55.369019       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d1083a84171e8b109d5205f26f918e35b5462caf78b728353423ce03b323617e] <==
	I1213 11:48:55.220293       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="527.507649ms"
	I1213 11:48:55.220455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.151µs"
	I1213 11:48:55.224905       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-flst4"
	I1213 11:48:55.225008       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jpkrw"
	I1213 11:48:55.263744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="306.738352ms"
	I1213 11:48:55.264410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="302.81642ms"
	I1213 11:48:55.286055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.92794ms"
	I1213 11:48:55.286195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.96µs"
	I1213 11:48:55.305291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.8015ms"
	I1213 11:48:55.305437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.899µs"
	I1213 11:48:55.311688       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 11:48:55.315876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.55µs"
	I1213 11:48:55.341605       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 11:48:55.341751       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 11:48:55.350088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.457µs"
	I1213 11:48:55.366697       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1213 11:49:02.032803       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.080341ms"
	I1213 11:49:02.032895       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.333µs"
	I1213 11:49:06.018447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.481µs"
	I1213 11:49:07.009220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.223µs"
	I1213 11:49:08.011580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.607µs"
	I1213 11:49:17.053568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.951µs"
	I1213 11:49:22.287987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.154678ms"
	I1213 11:49:22.288147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.132µs"
	I1213 11:49:26.769027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.548µs"
	
	
	==> kube-proxy [c2561ac9d9b2c9534552136b22e55b5102dd7123c91b9a47f4f6f0d17845ca3c] <==
	I1213 11:48:43.514628       1 server_others.go:69] "Using iptables proxy"
	I1213 11:48:43.636173       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1213 11:48:44.284415       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:48:44.323378       1 server_others.go:152] "Using iptables Proxier"
	I1213 11:48:44.330441       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 11:48:44.333165       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 11:48:44.339582       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 11:48:44.343150       1 server.go:846] "Version info" version="v1.28.0"
	I1213 11:48:44.343484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:48:44.363009       1 config.go:188] "Starting service config controller"
	I1213 11:48:44.363129       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 11:48:44.363214       1 config.go:97] "Starting endpoint slice config controller"
	I1213 11:48:44.363269       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 11:48:44.368314       1 config.go:315] "Starting node config controller"
	I1213 11:48:44.368409       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 11:48:44.463406       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 11:48:44.463647       1 shared_informer.go:318] Caches are synced for service config
	I1213 11:48:44.469008       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [938d6fe9735fccb0295851287a2c46be9275edddee6d4e17fa2757fee05fb949] <==
	I1213 11:48:41.268655       1 serving.go:348] Generated self-signed cert in-memory
	I1213 11:48:44.839307       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1213 11:48:44.839338       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:48:44.843329       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1213 11:48:44.843424       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1213 11:48:44.843499       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:48:44.843581       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 11:48:44.843630       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:48:44.843659       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1213 11:48:44.844124       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1213 11:48:44.844226       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1213 11:48:44.944152       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1213 11:48:44.944152       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1213 11:48:44.944180       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: E1213 11:48:55.266139     784 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-051699" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-051699' and this object
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373318     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3881b100-2d7e-4826-81ca-33ce091f0e54-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-flst4\" (UID: \"3881b100-2d7e-4826-81ca-33ce091f0e54\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4"
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373514     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw28w\" (UniqueName: \"kubernetes.io/projected/76679ef8-1925-4cdb-9473-1acc7e6609c7-kube-api-access-cw28w\") pod \"kubernetes-dashboard-8694d4445c-jpkrw\" (UID: \"76679ef8-1925-4cdb-9473-1acc7e6609c7\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jpkrw"
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373640     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvbn\" (UniqueName: \"kubernetes.io/projected/3881b100-2d7e-4826-81ca-33ce091f0e54-kube-api-access-2zvbn\") pod \"dashboard-metrics-scraper-5f989dc9cf-flst4\" (UID: \"3881b100-2d7e-4826-81ca-33ce091f0e54\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4"
	Dec 13 11:48:55 old-k8s-version-051699 kubelet[784]: I1213 11:48:55.373758     784 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/76679ef8-1925-4cdb-9473-1acc7e6609c7-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jpkrw\" (UID: \"76679ef8-1925-4cdb-9473-1acc7e6609c7\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jpkrw"
	Dec 13 11:48:56 old-k8s-version-051699 kubelet[784]: W1213 11:48:56.481251     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-4691975b5bdaeedf277f3b34a406d72b4576d3886314a4cf011877d7656825f7 WatchSource:0}: Error finding container 4691975b5bdaeedf277f3b34a406d72b4576d3886314a4cf011877d7656825f7: Status 404 returned error can't find the container with id 4691975b5bdaeedf277f3b34a406d72b4576d3886314a4cf011877d7656825f7
	Dec 13 11:48:56 old-k8s-version-051699 kubelet[784]: W1213 11:48:56.772638     784 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/5e184c16699de9fc964894f7fa2513ad31b5b8cf6fb0d06983fd2be6a98ed91b/crio-0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f WatchSource:0}: Error finding container 0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f: Status 404 returned error can't find the container with id 0523e04ddfb60e69900b92a7df6d48b7b63c7c9bcab9cd1e01d37be4402abd3f
	Dec 13 11:49:05 old-k8s-version-051699 kubelet[784]: I1213 11:49:05.987566     784 scope.go:117] "RemoveContainer" containerID="0fa8ef9d70e639cff07e6e1fe1dce5e70aef612fc7c15db1de207a7ba40eaf0a"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: I1213 11:49:06.018092     784 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jpkrw" podStartSLOduration=6.074268937 podCreationTimestamp="2025-12-13 11:48:55 +0000 UTC" firstStartedPulling="2025-12-13 11:48:56.486140592 +0000 UTC m=+19.852544610" lastFinishedPulling="2025-12-13 11:49:01.428591996 +0000 UTC m=+24.794996014" observedRunningTime="2025-12-13 11:49:02.006975317 +0000 UTC m=+25.373379351" watchObservedRunningTime="2025-12-13 11:49:06.016720341 +0000 UTC m=+29.383124367"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: I1213 11:49:06.991505     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: I1213 11:49:06.992489     784 scope.go:117] "RemoveContainer" containerID="0fa8ef9d70e639cff07e6e1fe1dce5e70aef612fc7c15db1de207a7ba40eaf0a"
	Dec 13 11:49:06 old-k8s-version-051699 kubelet[784]: E1213 11:49:06.993366     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:07 old-k8s-version-051699 kubelet[784]: I1213 11:49:07.994738     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:07 old-k8s-version-051699 kubelet[784]: E1213 11:49:07.995012     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:14 old-k8s-version-051699 kubelet[784]: I1213 11:49:14.018292     784 scope.go:117] "RemoveContainer" containerID="252ea1238e50c637074d8e48f4c01b8d464d784a390cecebc97a291bc3d45d6c"
	Dec 13 11:49:16 old-k8s-version-051699 kubelet[784]: I1213 11:49:16.754709     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:17 old-k8s-version-051699 kubelet[784]: I1213 11:49:17.030603     784 scope.go:117] "RemoveContainer" containerID="3fff06b673f80a386b2d9eaf249010344605196792204fa5dedef44ab12caf9d"
	Dec 13 11:49:17 old-k8s-version-051699 kubelet[784]: I1213 11:49:17.030853     784 scope.go:117] "RemoveContainer" containerID="db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f"
	Dec 13 11:49:17 old-k8s-version-051699 kubelet[784]: E1213 11:49:17.031121     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:26 old-k8s-version-051699 kubelet[784]: I1213 11:49:26.755098     784 scope.go:117] "RemoveContainer" containerID="db85d9cc53646d47a901958e38428f5fda6f7f752e06768df8eca15233ccc90f"
	Dec 13 11:49:26 old-k8s-version-051699 kubelet[784]: E1213 11:49:26.755885     784 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-flst4_kubernetes-dashboard(3881b100-2d7e-4826-81ca-33ce091f0e54)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-flst4" podUID="3881b100-2d7e-4826-81ca-33ce091f0e54"
	Dec 13 11:49:36 old-k8s-version-051699 kubelet[784]: I1213 11:49:36.295339     784 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 13 11:49:36 old-k8s-version-051699 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:49:36 old-k8s-version-051699 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:49:36 old-k8s-version-051699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ea0421efb7eb56b9e66dde1a483a1434ed846923a3c65b69a736c2f66a0ecb91] <==
	2025/12/13 11:49:01 Starting overwatch
	2025/12/13 11:49:01 Using namespace: kubernetes-dashboard
	2025/12/13 11:49:01 Using in-cluster config to connect to apiserver
	2025/12/13 11:49:01 Using secret token for csrf signing
	2025/12/13 11:49:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 11:49:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 11:49:01 Successful initial request to the apiserver, version: v1.28.0
	2025/12/13 11:49:01 Generating JWE encryption key
	2025/12/13 11:49:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 11:49:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 11:49:02 Initializing JWE encryption key from synchronized object
	2025/12/13 11:49:02 Creating in-cluster Sidecar client
	2025/12/13 11:49:02 Serving insecurely on HTTP port: 9090
	2025/12/13 11:49:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:49:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [252ea1238e50c637074d8e48f4c01b8d464d784a390cecebc97a291bc3d45d6c] <==
	I1213 11:48:43.060130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 11:49:13.161828       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b59f614a2a32ae75a805e01d493986cd39e6a71e3aed6253427f6024f7790b2e] <==
	I1213 11:49:14.074430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:49:14.093606       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:49:14.094731       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 11:49:31.493108       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:49:31.493358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-051699_cc8eedc4-c218-4a8a-818b-b095a84d5222!
	I1213 11:49:31.493503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c9bc71b9-ee13-4321-8101-70d105400c33", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-051699_cc8eedc4-c218-4a8a-818b-b095a84d5222 became leader
	I1213 11:49:31.594417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-051699_cc8eedc4-c218-4a8a-818b-b095a84d5222!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-051699 -n old-k8s-version-051699
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-051699 -n old-k8s-version-051699: exit status 2 (427.515702ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-051699 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (288.023528ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:50:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
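The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state probe rather than the addon itself: the check "sudo runc list -f json" exited with status 1 inside the node because /run/runc did not exist. A minimal manual re-run of that probe, assuming the default-k8s-diff-port-151605 node container is still up, would be:

	# re-run the same paused-state check minikube performs (sketch; node name taken from this test)
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-151605 -- sudo runc list -f json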
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-151605 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-151605 describe deploy/metrics-server -n kube-system: exit status 1 (122.869842ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-151605 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
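A quick way to see which image the metrics-server deployment actually carries (a sketch, assuming the deployment exists in kube-system by the time it is queried) would be:

	# print the container image(s) of the metrics-server deployment for this test's context
	kubectl --context default-k8s-diff-port-151605 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'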
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-151605
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-151605:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d",
	        "Created": "2025-12-13T11:49:50.135294946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 590646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:49:50.20055644Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/hosts",
	        "LogPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d-json.log",
	        "Name": "/default-k8s-diff-port-151605",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-151605:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-151605",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d",
	                "LowerDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-151605",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-151605/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-151605",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-151605",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-151605",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c7107b6621e2a28ceeadec35ad08f2f564c19944d43217c180ac0369fba4e233",
	            "SandboxKey": "/var/run/docker/netns/c7107b6621e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-151605": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:40:af:69:0e:7d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e01ba379a4d94c8de18912da562a485bb057ae2af70e58b76f1547550548184",
	                    "EndpointID": "40c6530cf09984676960fd84e2fec7ece0ee4a5fc45d7f33206d633d970cd014",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-151605",
	                        "ed91f41ddcee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
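The NetworkSettings.Ports block in the inspect output above holds the dynamically assigned host ports (33438-33442). Instead of scanning the full JSON, the same value can be read with a Go template; for example, for the API server port published as 8444/tcp (a sketch reusing the container name from this run, in the same style as the --format queries minikube itself issues later in this log):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-151605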
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-151605 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-151605 logs -n 25: (1.216182992s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-062409 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-062409                │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ ssh     │ -p cilium-062409 sudo crio config                                                                                                                                                                                                             │ cilium-062409                │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │                     │
	│ delete  │ -p cilium-062409                                                                                                                                                                                                                              │ cilium-062409                │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:45 UTC │
	│ start   │ -p force-systemd-env-181508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-181508     │ jenkins │ v1.37.0 │ 13 Dec 25 11:45 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p kubernetes-upgrade-854588                                                                                                                                                                                                                  │ kubernetes-upgrade-854588    │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p force-systemd-env-181508                                                                                                                                                                                                                   │ force-systemd-env-181508     │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-options-522461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ ssh     │ cert-options-522461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:47 UTC │
	│ ssh     │ -p cert-options-522461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ delete  │ -p cert-options-522461                                                                                                                                                                                                                        │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │                     │
	│ stop    │ -p old-k8s-version-051699 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-051699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:50:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:50:10.436397  593357 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:50:10.436751  593357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:50:10.436766  593357 out.go:374] Setting ErrFile to fd 2...
	I1213 11:50:10.436775  593357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:50:10.437045  593357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:50:10.437483  593357 out.go:368] Setting JSON to false
	I1213 11:50:10.438378  593357 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12763,"bootTime":1765613848,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:50:10.438453  593357 start.go:143] virtualization:  
	I1213 11:50:10.442013  593357 out.go:179] * [embed-certs-326948] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:50:10.446186  593357 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:50:10.446384  593357 notify.go:221] Checking for updates...
	I1213 11:50:10.453052  593357 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:50:10.456160  593357 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:50:10.459435  593357 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:50:10.462427  593357 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:50:10.465452  593357 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:50:10.468926  593357 config.go:182] Loaded profile config "default-k8s-diff-port-151605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:50:10.469087  593357 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:50:10.515617  593357 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:50:10.515752  593357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:50:10.613330  593357 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 11:50:10.602267047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:50:10.613432  593357 docker.go:319] overlay module found
	I1213 11:50:10.616663  593357 out.go:179] * Using the docker driver based on user configuration
	I1213 11:50:10.619498  593357 start.go:309] selected driver: docker
	I1213 11:50:10.619584  593357 start.go:927] validating driver "docker" against <nil>
	I1213 11:50:10.619600  593357 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:50:10.620310  593357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:50:10.720554  593357 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 11:50:10.708367204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:50:10.720712  593357 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:50:10.720925  593357 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:50:10.723817  593357 out.go:179] * Using Docker driver with root privileges
	I1213 11:50:10.726719  593357 cni.go:84] Creating CNI manager for ""
	I1213 11:50:10.726785  593357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:50:10.726797  593357 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:50:10.726875  593357 start.go:353] cluster config:
	{Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:50:10.730025  593357 out.go:179] * Starting "embed-certs-326948" primary control-plane node in "embed-certs-326948" cluster
	I1213 11:50:10.732835  593357 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:50:10.735666  593357 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:50:10.738444  593357 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:50:10.738493  593357 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:50:10.738519  593357 cache.go:65] Caching tarball of preloaded images
	I1213 11:50:10.738608  593357 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:50:10.738623  593357 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:50:10.738735  593357 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/config.json ...
	I1213 11:50:10.738759  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/config.json: {Name:mk7e325f67ea75a1cfe7ba83f57d0a294688b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:10.738915  593357 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:50:10.759782  593357 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:50:10.759808  593357 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:50:10.759825  593357 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:50:10.759860  593357 start.go:360] acquireMachinesLock for embed-certs-326948: {Name:mk006cdb726d13b418884982bd33ef960e248469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:50:10.759961  593357 start.go:364] duration metric: took 80.846µs to acquireMachinesLock for "embed-certs-326948"
	I1213 11:50:10.759992  593357 start.go:93] Provisioning new machine with config: &{Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:50:10.760071  593357 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:50:10.763404  593357 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:50:10.763683  593357 start.go:159] libmachine.API.Create for "embed-certs-326948" (driver="docker")
	I1213 11:50:10.763715  593357 client.go:173] LocalClient.Create starting
	I1213 11:50:10.763783  593357 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:50:10.763821  593357 main.go:143] libmachine: Decoding PEM data...
	I1213 11:50:10.763842  593357 main.go:143] libmachine: Parsing certificate...
	I1213 11:50:10.763902  593357 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:50:10.763925  593357 main.go:143] libmachine: Decoding PEM data...
	I1213 11:50:10.763947  593357 main.go:143] libmachine: Parsing certificate...
	I1213 11:50:10.764317  593357 cli_runner.go:164] Run: docker network inspect embed-certs-326948 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:50:10.781132  593357 cli_runner.go:211] docker network inspect embed-certs-326948 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:50:10.781226  593357 network_create.go:284] running [docker network inspect embed-certs-326948] to gather additional debugging logs...
	I1213 11:50:10.781249  593357 cli_runner.go:164] Run: docker network inspect embed-certs-326948
	W1213 11:50:10.801228  593357 cli_runner.go:211] docker network inspect embed-certs-326948 returned with exit code 1
	I1213 11:50:10.801263  593357 network_create.go:287] error running [docker network inspect embed-certs-326948]: docker network inspect embed-certs-326948: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-326948 not found
	I1213 11:50:10.801277  593357 network_create.go:289] output of [docker network inspect embed-certs-326948]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-326948 not found
	
	** /stderr **
	I1213 11:50:10.801374  593357 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:50:10.822011  593357 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:50:10.822407  593357 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:50:10.822655  593357 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:50:10.823080  593357 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001977a50}
	I1213 11:50:10.823107  593357 network_create.go:124] attempt to create docker network embed-certs-326948 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 11:50:10.823163  593357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-326948 embed-certs-326948
	I1213 11:50:10.889480  593357 network_create.go:108] docker network embed-certs-326948 192.168.76.0/24 created
	I1213 11:50:10.889512  593357 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-326948" container
	I1213 11:50:10.889598  593357 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:50:10.909174  593357 cli_runner.go:164] Run: docker volume create embed-certs-326948 --label name.minikube.sigs.k8s.io=embed-certs-326948 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:50:10.931629  593357 oci.go:103] Successfully created a docker volume embed-certs-326948
	I1213 11:50:10.931739  593357 cli_runner.go:164] Run: docker run --rm --name embed-certs-326948-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-326948 --entrypoint /usr/bin/test -v embed-certs-326948:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:50:11.552071  593357 oci.go:107] Successfully prepared a docker volume embed-certs-326948
	I1213 11:50:11.552136  593357 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:50:11.552147  593357 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:50:11.552217  593357 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-326948:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:50:15.910201  593357 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-326948:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.35794492s)
	I1213 11:50:15.910235  593357 kic.go:203] duration metric: took 4.358085499s to extract preloaded images to volume ...
	W1213 11:50:15.910381  593357 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:50:15.910480  593357 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:50:16.015772  593357 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-326948 --name embed-certs-326948 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-326948 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-326948 --network embed-certs-326948 --ip 192.168.76.2 --volume embed-certs-326948:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:50:16.498223  593357 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Running}}
	I1213 11:50:16.530191  593357 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:50:16.564003  593357 cli_runner.go:164] Run: docker exec embed-certs-326948 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:50:16.644656  593357 oci.go:144] the created container "embed-certs-326948" has a running status.
	I1213 11:50:16.644685  593357 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa...
	I1213 11:50:17.133383  593357 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:50:17.176425  593357 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:50:17.213610  593357 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:50:17.213634  593357 kic_runner.go:114] Args: [docker exec --privileged embed-certs-326948 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:50:17.307641  593357 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:50:17.330200  593357 machine.go:94] provisionDockerMachine start ...
	I1213 11:50:17.330308  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:17.354316  593357 main.go:143] libmachine: Using SSH client type: native
	I1213 11:50:17.354656  593357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1213 11:50:17.354666  593357 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:50:17.355368  593357 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:50:21.760128  590024 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 11:50:21.760189  590024 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:50:21.760290  590024 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:50:21.760354  590024 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:50:21.760397  590024 kubeadm.go:319] OS: Linux
	I1213 11:50:21.760446  590024 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:50:21.760500  590024 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:50:21.760550  590024 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:50:21.760610  590024 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:50:21.760673  590024 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:50:21.760732  590024 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:50:21.760789  590024 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:50:21.760845  590024 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:50:21.760901  590024 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:50:21.760994  590024 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:50:21.761099  590024 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:50:21.761200  590024 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:50:21.761268  590024 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:50:21.764606  590024 out.go:252]   - Generating certificates and keys ...
	I1213 11:50:21.764870  590024 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:50:21.764955  590024 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:50:21.765024  590024 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:50:21.765085  590024 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:50:21.765174  590024 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:50:21.765258  590024 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:50:21.765339  590024 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:50:21.765483  590024 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-151605 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:50:21.765547  590024 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:50:21.765676  590024 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-151605 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:50:21.765742  590024 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:50:21.766010  590024 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:50:21.766123  590024 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:50:21.766247  590024 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:50:21.766390  590024 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:50:21.766509  590024 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:50:21.766616  590024 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:50:21.766716  590024 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:50:21.766832  590024 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:50:21.767018  590024 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:50:21.767192  590024 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:50:21.773967  590024 out.go:252]   - Booting up control plane ...
	I1213 11:50:21.774081  590024 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:50:21.774170  590024 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:50:21.774248  590024 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:50:21.774370  590024 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:50:21.774468  590024 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:50:21.774576  590024 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:50:21.774664  590024 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:50:21.774707  590024 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:50:21.774848  590024 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:50:21.774956  590024 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:50:21.775021  590024 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.509559857s
	I1213 11:50:21.775112  590024 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 11:50:21.775191  590024 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1213 11:50:21.775286  590024 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 11:50:21.775376  590024 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 11:50:21.775452  590024 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.815713716s
	I1213 11:50:21.775528  590024 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.742693966s
	I1213 11:50:21.775596  590024 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.502031825s
	I1213 11:50:21.775714  590024 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 11:50:21.775871  590024 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 11:50:21.775943  590024 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 11:50:21.776141  590024 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-151605 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 11:50:21.776201  590024 kubeadm.go:319] [bootstrap-token] Using token: srca9t.cya3czcgs6l0b62b
	I1213 11:50:21.779400  590024 out.go:252]   - Configuring RBAC rules ...
	I1213 11:50:21.779615  590024 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 11:50:21.779721  590024 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 11:50:21.779877  590024 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 11:50:21.780020  590024 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 11:50:21.780149  590024 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 11:50:21.780246  590024 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 11:50:21.780375  590024 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 11:50:21.780424  590024 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 11:50:21.780473  590024 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 11:50:21.780481  590024 kubeadm.go:319] 
	I1213 11:50:21.780547  590024 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 11:50:21.780554  590024 kubeadm.go:319] 
	I1213 11:50:21.780646  590024 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 11:50:21.780654  590024 kubeadm.go:319] 
	I1213 11:50:21.780682  590024 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 11:50:21.780750  590024 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 11:50:21.780809  590024 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 11:50:21.780817  590024 kubeadm.go:319] 
	I1213 11:50:21.780875  590024 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 11:50:21.780883  590024 kubeadm.go:319] 
	I1213 11:50:21.780941  590024 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 11:50:21.780950  590024 kubeadm.go:319] 
	I1213 11:50:21.781007  590024 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 11:50:21.781091  590024 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 11:50:21.781177  590024 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 11:50:21.781186  590024 kubeadm.go:319] 
	I1213 11:50:21.781277  590024 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 11:50:21.781370  590024 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 11:50:21.781378  590024 kubeadm.go:319] 
	I1213 11:50:21.781471  590024 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token srca9t.cya3czcgs6l0b62b \
	I1213 11:50:21.781582  590024 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 \
	I1213 11:50:21.781604  590024 kubeadm.go:319] 	--control-plane 
	I1213 11:50:21.781607  590024 kubeadm.go:319] 
	I1213 11:50:21.781699  590024 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 11:50:21.781703  590024 kubeadm.go:319] 
	I1213 11:50:21.781795  590024 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token srca9t.cya3czcgs6l0b62b \
	I1213 11:50:21.781925  590024 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 
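(Note: the [control-plane-check] lines earlier in this init output poll the standard component health endpoints. Reproducing the same checks by hand from inside the node would look roughly like the following editor sketch; the addresses and ports are taken from this log, and -k is needed because the endpoints serve self-signed certificates:)
	curl -k https://192.168.85.2:8444/livez        # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz        # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez          # kube-scheduler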
	I1213 11:50:21.781939  590024 cni.go:84] Creating CNI manager for ""
	I1213 11:50:21.781946  590024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:50:21.785363  590024 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 11:50:20.515455  593357 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-326948
	
	I1213 11:50:20.515567  593357 ubuntu.go:182] provisioning hostname "embed-certs-326948"
	I1213 11:50:20.515658  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:20.540013  593357 main.go:143] libmachine: Using SSH client type: native
	I1213 11:50:20.540337  593357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1213 11:50:20.540356  593357 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-326948 && echo "embed-certs-326948" | sudo tee /etc/hostname
	I1213 11:50:20.714173  593357 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-326948
	
	I1213 11:50:20.714327  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:20.735599  593357 main.go:143] libmachine: Using SSH client type: native
	I1213 11:50:20.737696  593357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1213 11:50:20.737732  593357 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-326948' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-326948/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-326948' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:50:20.907834  593357 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:50:20.907857  593357 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:50:20.907877  593357 ubuntu.go:190] setting up certificates
	I1213 11:50:20.907893  593357 provision.go:84] configureAuth start
	I1213 11:50:20.907949  593357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:50:20.930637  593357 provision.go:143] copyHostCerts
	I1213 11:50:20.930700  593357 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:50:20.930709  593357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:50:20.930870  593357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:50:20.930991  593357 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:50:20.930998  593357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:50:20.931027  593357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:50:20.931089  593357 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:50:20.931094  593357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:50:20.931117  593357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:50:20.931176  593357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.embed-certs-326948 san=[127.0.0.1 192.168.76.2 embed-certs-326948 localhost minikube]
	I1213 11:50:21.152318  593357 provision.go:177] copyRemoteCerts
	I1213 11:50:21.152419  593357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:50:21.152518  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:21.177918  593357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:50:21.296984  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:50:21.325389  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:50:21.357486  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:50:21.379334  593357 provision.go:87] duration metric: took 471.426124ms to configureAuth
	I1213 11:50:21.379363  593357 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:50:21.379613  593357 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:50:21.379742  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:21.401963  593357 main.go:143] libmachine: Using SSH client type: native
	I1213 11:50:21.402297  593357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1213 11:50:21.402318  593357 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:50:21.738476  593357 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:50:21.738501  593357 machine.go:97] duration metric: took 4.408280567s to provisionDockerMachine
	I1213 11:50:21.738512  593357 client.go:176] duration metric: took 10.974785126s to LocalClient.Create
	I1213 11:50:21.738525  593357 start.go:167] duration metric: took 10.974844522s to libmachine.API.Create "embed-certs-326948"
	I1213 11:50:21.738533  593357 start.go:293] postStartSetup for "embed-certs-326948" (driver="docker")
	I1213 11:50:21.738542  593357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:50:21.738610  593357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:50:21.738659  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:21.766035  593357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:50:21.889595  593357 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:50:21.904348  593357 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:50:21.904378  593357 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:50:21.904391  593357 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:50:21.904463  593357 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:50:21.904592  593357 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:50:21.904728  593357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:50:21.915319  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:50:21.940281  593357 start.go:296] duration metric: took 201.734104ms for postStartSetup
	I1213 11:50:21.940642  593357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:50:21.965330  593357 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/config.json ...
	I1213 11:50:21.965659  593357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:50:21.965722  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:21.988532  593357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:50:22.101836  593357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:50:22.108005  593357 start.go:128] duration metric: took 11.347914944s to createHost
	I1213 11:50:22.108032  593357 start.go:83] releasing machines lock for "embed-certs-326948", held for 11.348056887s
	I1213 11:50:22.108128  593357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:50:22.133759  593357 ssh_runner.go:195] Run: cat /version.json
	I1213 11:50:22.133818  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:22.134069  593357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:50:22.134131  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:22.171426  593357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:50:22.171490  593357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:50:22.414170  593357 ssh_runner.go:195] Run: systemctl --version
	I1213 11:50:22.425840  593357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:50:22.513888  593357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:50:22.521610  593357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:50:22.521689  593357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:50:22.578904  593357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:50:22.578946  593357 start.go:496] detecting cgroup driver to use...
	I1213 11:50:22.578999  593357 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:50:22.579068  593357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:50:22.606041  593357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:50:22.620284  593357 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:50:22.620360  593357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:50:22.638051  593357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:50:22.659368  593357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:50:22.789600  593357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:50:22.950972  593357 docker.go:234] disabling docker service ...
	I1213 11:50:22.951111  593357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:50:22.972796  593357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:50:22.989090  593357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:50:23.151693  593357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:50:23.274442  593357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:50:23.289616  593357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:50:23.307395  593357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:50:23.307572  593357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:50:23.317474  593357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:50:23.317546  593357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:50:23.327055  593357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:50:23.338877  593357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:50:23.350035  593357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:50:23.360300  593357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:50:23.369207  593357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:50:23.386037  593357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:50:23.401206  593357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:50:23.412879  593357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:50:23.420722  593357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:50:23.557143  593357 ssh_runner.go:195] Run: sudo systemctl restart crio
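(Note: the sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf — pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm they landed after the restart is simply to grep that file; this is an editor sketch, not a command from the run:)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf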
	I1213 11:50:23.724311  593357 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:50:23.724436  593357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:50:23.728735  593357 start.go:564] Will wait 60s for crictl version
	I1213 11:50:23.728883  593357 ssh_runner.go:195] Run: which crictl
	I1213 11:50:23.733992  593357 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:50:23.765094  593357 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:50:23.765249  593357 ssh_runner.go:195] Run: crio --version
	I1213 11:50:23.796084  593357 ssh_runner.go:195] Run: crio --version
	I1213 11:50:23.830594  593357 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:50:23.833542  593357 cli_runner.go:164] Run: docker network inspect embed-certs-326948 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:50:23.850906  593357 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:50:23.854930  593357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:50:23.864670  593357 kubeadm.go:884] updating cluster {Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:50:23.864795  593357 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:50:23.864850  593357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:50:23.901285  593357 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:50:23.901310  593357 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:50:23.901367  593357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:50:23.934565  593357 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:50:23.934590  593357 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:50:23.934599  593357 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1213 11:50:23.934690  593357 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-326948 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
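(Note: the [Unit]/[Service] fragment above is what gets written a few lines later as the kubelet drop-in, 10-kubeadm.conf. Once installed, the effective unit and command line can be checked with standard systemd tooling, e.g.:)
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart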
	I1213 11:50:23.934780  593357 ssh_runner.go:195] Run: crio config
	I1213 11:50:24.000724  593357 cni.go:84] Creating CNI manager for ""
	I1213 11:50:24.000746  593357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:50:24.000762  593357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:50:24.000833  593357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-326948 NodeName:embed-certs-326948 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:50:24.000981  593357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-326948"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:50:24.001059  593357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:50:24.012738  593357 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:50:24.012853  593357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:50:24.022444  593357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 11:50:24.043133  593357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:50:24.059944  593357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
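(Note: the rendered kubeadm config shown above is what was just copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A sanity check by hand could use kubeadm's own validator — an editor sketch, assuming the validate subcommand available in recent kubeadm releases and the binary path used elsewhere in this log:)
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new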
	I1213 11:50:24.078186  593357 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:50:24.082510  593357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:50:24.096062  593357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:50:24.220337  593357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:50:24.236352  593357 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948 for IP: 192.168.76.2
	I1213 11:50:24.236375  593357 certs.go:195] generating shared ca certs ...
	I1213 11:50:24.236392  593357 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:24.236604  593357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:50:24.236684  593357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:50:24.236699  593357 certs.go:257] generating profile certs ...
	I1213 11:50:24.236776  593357 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.key
	I1213 11:50:24.236811  593357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.crt with IP's: []
	I1213 11:50:24.537383  593357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.crt ...
	I1213 11:50:24.537416  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.crt: {Name:mkeff112fda622cb28d75f0efc8dd212f7ddfc12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:24.537600  593357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.key ...
	I1213 11:50:24.537615  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.key: {Name:mke19721fb707c321b46d3f206bd90a5cc99dcd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:24.537696  593357 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key.dff061d2
	I1213 11:50:24.537715  593357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt.dff061d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 11:50:24.700918  593357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt.dff061d2 ...
	I1213 11:50:24.700953  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt.dff061d2: {Name:mk216ba033a242ae414b057f9d21ed026c4d9020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:24.701180  593357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key.dff061d2 ...
	I1213 11:50:24.701199  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key.dff061d2: {Name:mk60bc6d2a1aeb1ab3c8a4226a9294f40c0dea19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:24.701290  593357 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt.dff061d2 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt
	I1213 11:50:24.701373  593357 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key.dff061d2 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key
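(Note: the apiserver certificate generated above is signed for the SANs listed in the Generating line — 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. With OpenSSL 1.1.1+ the SANs on the written cert can be confirmed directly; this is an editor sketch using the path from this log:)
	openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt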
	I1213 11:50:24.701466  593357 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key
	I1213 11:50:24.701484  593357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.crt with IP's: []
	I1213 11:50:24.926367  593357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.crt ...
	I1213 11:50:24.926402  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.crt: {Name:mke63b493e99a39b80a0d87657779ef53567ca04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:24.926599  593357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key ...
	I1213 11:50:24.926618  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key: {Name:mk02d2c76ee2cf064c441b71905e7db552a0b854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:24.926816  593357 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:50:24.926868  593357 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:50:24.926882  593357 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:50:24.926911  593357 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:50:24.926941  593357 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:50:24.926971  593357 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:50:24.927031  593357 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:50:24.927707  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:50:24.946792  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:50:24.965895  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:50:24.995551  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:50:25.027300  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 11:50:25.074969  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:50:25.114627  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:50:25.141338  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:50:25.163543  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:50:25.183057  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:50:25.202838  593357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:50:25.221116  593357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:50:25.235216  593357 ssh_runner.go:195] Run: openssl version
	I1213 11:50:25.242966  593357 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:50:25.250261  593357 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:50:25.257541  593357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:50:25.261303  593357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:50:25.261390  593357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:50:25.303330  593357 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:50:25.311228  593357 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:50:25.318702  593357 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:50:25.326372  593357 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:50:25.335061  593357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:50:25.338907  593357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:50:25.339021  593357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:50:25.386841  593357 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:50:25.395013  593357 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:50:25.402607  593357 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:50:25.410230  593357 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:50:25.417804  593357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:50:25.421308  593357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:50:25.421373  593357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:50:21.788364  590024 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 11:50:21.792901  590024 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 11:50:21.792924  590024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1213 11:50:21.812269  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
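(Note: the cni.yaml applied here is the kindnet manifest recommended above for the docker driver + crio runtime. Once the apply returns, a rough rollout check from the node would be — editor sketch, reusing the kubeconfig path from the apply command:)
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide | grep -i kindnet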
	I1213 11:50:22.275452  590024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 11:50:22.275630  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:22.275718  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-151605 minikube.k8s.io/updated_at=2025_12_13T11_50_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=default-k8s-diff-port-151605 minikube.k8s.io/primary=true
	I1213 11:50:22.477745  590024 ops.go:34] apiserver oom_adj: -16
	I1213 11:50:22.477871  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:22.978462  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:23.478922  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:23.978457  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:24.478455  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:24.978920  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:25.478816  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:25.464305  593357 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:50:25.472410  593357 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
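(Note: the ln -fs commands above create OpenSSL-style hash symlinks in /etc/ssl/certs, so that the subject-hash of each CA — printed by the openssl x509 -hash calls — resolves to the installed PEM. Verifying one link by hand, with paths taken from this log:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0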
	I1213 11:50:25.484394  593357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:50:25.492698  593357 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:50:25.492747  593357 kubeadm.go:401] StartCluster: {Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:50:25.492820  593357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:50:25.492889  593357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:50:25.530786  593357 cri.go:89] found id: ""
	I1213 11:50:25.530866  593357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:50:25.542067  593357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:50:25.556057  593357 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:50:25.556126  593357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:50:25.574153  593357 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:50:25.574174  593357 kubeadm.go:158] found existing configuration files:
	
	I1213 11:50:25.574226  593357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:50:25.587292  593357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:50:25.587358  593357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:50:25.599608  593357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:50:25.609218  593357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:50:25.609283  593357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:50:25.623304  593357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:50:25.636100  593357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:50:25.636165  593357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:50:25.650143  593357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:50:25.659294  593357 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:50:25.659360  593357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:50:25.667772  593357 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:50:25.744933  593357 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 11:50:25.745086  593357 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:50:25.792604  593357 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:50:25.792676  593357 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:50:25.792713  593357 kubeadm.go:319] OS: Linux
	I1213 11:50:25.792761  593357 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:50:25.792815  593357 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:50:25.792865  593357 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:50:25.792915  593357 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:50:25.792965  593357 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:50:25.793014  593357 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:50:25.793061  593357 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:50:25.793110  593357 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:50:25.793158  593357 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:50:25.877393  593357 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:50:25.877506  593357 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:50:25.877604  593357 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:50:25.887328  593357 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:50:25.977915  590024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:26.115152  590024 kubeadm.go:1114] duration metric: took 3.839564176s to wait for elevateKubeSystemPrivileges
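(Note: the repeated `kubectl get sa default` calls above are a poll for the default ServiceAccount to appear before RBAC is applied; elevateKubeSystemPrivileges returns once it exists. A hand-rolled equivalent of that wait loop — editor sketch only:)
	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do sleep 0.5; done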
	I1213 11:50:26.115180  590024 kubeadm.go:403] duration metric: took 25.596895044s to StartCluster
	I1213 11:50:26.115195  590024 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:26.115257  590024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:50:26.115951  590024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:26.116151  590024 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:50:26.116239  590024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 11:50:26.116481  590024 config.go:182] Loaded profile config "default-k8s-diff-port-151605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:50:26.116519  590024 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:50:26.116580  590024 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-151605"
	I1213 11:50:26.116598  590024 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-151605"
	I1213 11:50:26.116619  590024 host.go:66] Checking if "default-k8s-diff-port-151605" exists ...
	I1213 11:50:26.117413  590024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:50:26.117583  590024 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-151605"
	I1213 11:50:26.117609  590024 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-151605"
	I1213 11:50:26.117892  590024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:50:26.121412  590024 out.go:179] * Verifying Kubernetes components...
	I1213 11:50:26.129831  590024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:50:26.161648  590024 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-151605"
	I1213 11:50:26.161687  590024 host.go:66] Checking if "default-k8s-diff-port-151605" exists ...
	I1213 11:50:26.162150  590024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:50:26.175195  590024 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:50:26.179401  590024 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:50:26.179425  590024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:50:26.179490  590024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:50:26.197473  590024 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:50:26.197500  590024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:50:26.197568  590024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:50:26.213682  590024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:50:26.237372  590024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:50:26.602237  590024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:50:26.768993  590024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:50:26.809113  590024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 11:50:26.809229  590024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:50:28.038645  590024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.436372548s)
	I1213 11:50:28.038711  590024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.269695881s)
	I1213 11:50:28.039048  590024 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.229796083s)
	I1213 11:50:28.039800  590024 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-151605" to be "Ready" ...
	I1213 11:50:28.040041  590024 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.230897866s)
	I1213 11:50:28.040067  590024 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
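The step above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 in this run). A minimal way to confirm the injected hosts stanza, as a sketch that assumes kubectl is pointed at this profile's context:

	kubectl --context default-k8s-diff-port-151605 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A4 'hosts {'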
	I1213 11:50:28.109043  590024 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 11:50:25.892328  593357 out.go:252]   - Generating certificates and keys ...
	I1213 11:50:25.892438  593357 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:50:25.892515  593357 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:50:25.983170  593357 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:50:27.267405  593357 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:50:28.154780  593357 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:50:29.366801  593357 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:50:29.707976  593357 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:50:29.708325  593357 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-326948 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:50:30.383850  593357 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:50:30.383987  593357 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-326948 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 11:50:28.111069  590024 addons.go:530] duration metric: took 1.994544733s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 11:50:28.545957  590024 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-151605" context rescaled to 1 replicas
	W1213 11:50:30.047459  590024 node_ready.go:57] node "default-k8s-diff-port-151605" has "Ready":"False" status (will retry)
	I1213 11:50:30.529259  593357 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:50:31.379878  593357 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:50:31.671394  593357 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:50:31.671721  593357 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:50:32.152218  593357 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:50:32.794579  593357 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:50:33.013603  593357 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:50:33.420062  593357 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:50:34.337606  593357 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:50:34.338434  593357 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:50:34.341312  593357 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:50:34.344614  593357 out.go:252]   - Booting up control plane ...
	I1213 11:50:34.344714  593357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:50:34.344809  593357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:50:34.344877  593357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:50:34.361024  593357 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:50:34.361388  593357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:50:34.369143  593357 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:50:34.369463  593357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:50:34.369513  593357 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:50:34.517858  593357 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:50:34.517986  593357 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1213 11:50:32.542837  590024 node_ready.go:57] node "default-k8s-diff-port-151605" has "Ready":"False" status (will retry)
	W1213 11:50:34.543734  590024 node_ready.go:57] node "default-k8s-diff-port-151605" has "Ready":"False" status (will retry)
	I1213 11:50:36.518041  593357 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000764034s
	I1213 11:50:36.526334  593357 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 11:50:36.526461  593357 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 11:50:36.526582  593357 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 11:50:36.526675  593357 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1213 11:50:37.042924  590024 node_ready.go:57] node "default-k8s-diff-port-151605" has "Ready":"False" status (will retry)
	W1213 11:50:39.043196  590024 node_ready.go:57] node "default-k8s-diff-port-151605" has "Ready":"False" status (will retry)
	I1213 11:50:41.024350  593357 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.496552052s
	I1213 11:50:42.174020  593357 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.647637642s
	I1213 11:50:44.032589  593357 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.504978414s
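The control-plane-check phase above polls the three component health endpoints listed at 11:50:36. The same probes can be reproduced by hand from inside the node; a sketch, assuming the stock kubeadm ports and that anonymous access to the health paths is still enabled (the default):

	curl -k https://192.168.76.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler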
	I1213 11:50:44.069962  593357 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 11:50:44.091062  593357 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 11:50:44.108036  593357 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 11:50:44.108240  593357 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-326948 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 11:50:44.124990  593357 kubeadm.go:319] [bootstrap-token] Using token: t98phw.2qq3wderzd63igpg
	W1213 11:50:41.543019  590024 node_ready.go:57] node "default-k8s-diff-port-151605" has "Ready":"False" status (will retry)
	I1213 11:50:42.043670  590024 node_ready.go:49] node "default-k8s-diff-port-151605" is "Ready"
	I1213 11:50:42.043698  590024 node_ready.go:38] duration metric: took 14.003872785s for node "default-k8s-diff-port-151605" to be "Ready" ...
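The node_ready poll above amounts to waiting on the node's Ready condition; a one-line sketch of the same check, assuming kubectl is configured for this cluster:

	kubectl wait --for=condition=Ready node/default-k8s-diff-port-151605 --timeout=6m0s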
	I1213 11:50:42.043711  590024 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:50:42.043777  590024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:50:42.078860  590024 api_server.go:72] duration metric: took 15.962680608s to wait for apiserver process to appear ...
	I1213 11:50:42.078889  590024 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:50:42.078911  590024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1213 11:50:42.090463  590024 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1213 11:50:42.099232  590024 api_server.go:141] control plane version: v1.34.2
	I1213 11:50:42.099263  590024 api_server.go:131] duration metric: took 20.366215ms to wait for apiserver health ...
	I1213 11:50:42.099273  590024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:50:42.107756  590024 system_pods.go:59] 8 kube-system pods found
	I1213 11:50:42.107857  590024 system_pods.go:61] "coredns-66bc5c9577-pr2h6" [c52a25e5-9ec2-476b-bce3-7e7a2129e082] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:50:42.107884  590024 system_pods.go:61] "etcd-default-k8s-diff-port-151605" [3fafb6ec-d98c-414d-a043-ee1e78a09887] Running
	I1213 11:50:42.107930  590024 system_pods.go:61] "kindnet-4bq9f" [dc9ca822-d910-4583-afbc-ea67e425f553] Running
	I1213 11:50:42.107961  590024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-151605" [23a6d49a-8eb0-4c8d-bd32-f1033e9ada92] Running
	I1213 11:50:42.107986  590024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-151605" [eff2dec0-908d-4a8c-8681-7fd4756d2244] Running
	I1213 11:50:42.108021  590024 system_pods.go:61] "kube-proxy-7sl78" [fa439c4d-470b-4cd1-868b-45116bb9e6f5] Running
	I1213 11:50:42.108049  590024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-151605" [aebb198d-e921-464a-a681-7dcb8c244579] Running
	I1213 11:50:42.108075  590024 system_pods.go:61] "storage-provisioner" [9a39a61a-2538-4bf8-ab07-ccfbbd952666] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:50:42.108115  590024 system_pods.go:74] duration metric: took 8.834224ms to wait for pod list to return data ...
	I1213 11:50:42.108160  590024 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:50:42.116697  590024 default_sa.go:45] found service account: "default"
	I1213 11:50:42.116784  590024 default_sa.go:55] duration metric: took 8.566603ms for default service account to be created ...
	I1213 11:50:42.116831  590024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:50:42.125629  590024 system_pods.go:86] 8 kube-system pods found
	I1213 11:50:42.125735  590024 system_pods.go:89] "coredns-66bc5c9577-pr2h6" [c52a25e5-9ec2-476b-bce3-7e7a2129e082] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:50:42.125763  590024 system_pods.go:89] "etcd-default-k8s-diff-port-151605" [3fafb6ec-d98c-414d-a043-ee1e78a09887] Running
	I1213 11:50:42.125804  590024 system_pods.go:89] "kindnet-4bq9f" [dc9ca822-d910-4583-afbc-ea67e425f553] Running
	I1213 11:50:42.125833  590024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-151605" [23a6d49a-8eb0-4c8d-bd32-f1033e9ada92] Running
	I1213 11:50:42.125857  590024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-151605" [eff2dec0-908d-4a8c-8681-7fd4756d2244] Running
	I1213 11:50:42.125893  590024 system_pods.go:89] "kube-proxy-7sl78" [fa439c4d-470b-4cd1-868b-45116bb9e6f5] Running
	I1213 11:50:42.125921  590024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-151605" [aebb198d-e921-464a-a681-7dcb8c244579] Running
	I1213 11:50:42.125952  590024 system_pods.go:89] "storage-provisioner" [9a39a61a-2538-4bf8-ab07-ccfbbd952666] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:50:42.126011  590024 retry.go:31] will retry after 259.482207ms: missing components: kube-dns
	I1213 11:50:42.396089  590024 system_pods.go:86] 8 kube-system pods found
	I1213 11:50:42.396177  590024 system_pods.go:89] "coredns-66bc5c9577-pr2h6" [c52a25e5-9ec2-476b-bce3-7e7a2129e082] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:50:42.396206  590024 system_pods.go:89] "etcd-default-k8s-diff-port-151605" [3fafb6ec-d98c-414d-a043-ee1e78a09887] Running
	I1213 11:50:42.396248  590024 system_pods.go:89] "kindnet-4bq9f" [dc9ca822-d910-4583-afbc-ea67e425f553] Running
	I1213 11:50:42.396271  590024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-151605" [23a6d49a-8eb0-4c8d-bd32-f1033e9ada92] Running
	I1213 11:50:42.396293  590024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-151605" [eff2dec0-908d-4a8c-8681-7fd4756d2244] Running
	I1213 11:50:42.396372  590024 system_pods.go:89] "kube-proxy-7sl78" [fa439c4d-470b-4cd1-868b-45116bb9e6f5] Running
	I1213 11:50:42.396413  590024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-151605" [aebb198d-e921-464a-a681-7dcb8c244579] Running
	I1213 11:50:42.396441  590024 system_pods.go:89] "storage-provisioner" [9a39a61a-2538-4bf8-ab07-ccfbbd952666] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 11:50:42.396486  590024 retry.go:31] will retry after 281.718292ms: missing components: kube-dns
	I1213 11:50:42.683293  590024 system_pods.go:86] 8 kube-system pods found
	I1213 11:50:42.683331  590024 system_pods.go:89] "coredns-66bc5c9577-pr2h6" [c52a25e5-9ec2-476b-bce3-7e7a2129e082] Running
	I1213 11:50:42.683338  590024 system_pods.go:89] "etcd-default-k8s-diff-port-151605" [3fafb6ec-d98c-414d-a043-ee1e78a09887] Running
	I1213 11:50:42.683343  590024 system_pods.go:89] "kindnet-4bq9f" [dc9ca822-d910-4583-afbc-ea67e425f553] Running
	I1213 11:50:42.683348  590024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-151605" [23a6d49a-8eb0-4c8d-bd32-f1033e9ada92] Running
	I1213 11:50:42.683352  590024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-151605" [eff2dec0-908d-4a8c-8681-7fd4756d2244] Running
	I1213 11:50:42.683356  590024 system_pods.go:89] "kube-proxy-7sl78" [fa439c4d-470b-4cd1-868b-45116bb9e6f5] Running
	I1213 11:50:42.683360  590024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-151605" [aebb198d-e921-464a-a681-7dcb8c244579] Running
	I1213 11:50:42.683365  590024 system_pods.go:89] "storage-provisioner" [9a39a61a-2538-4bf8-ab07-ccfbbd952666] Running
	I1213 11:50:42.683374  590024 system_pods.go:126] duration metric: took 566.523534ms to wait for k8s-apps to be running ...
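The system_pods wait above retried until kube-dns left Pending; the same view can be pulled directly, as a sketch assuming kubectl access to the cluster:

	kubectl -n kube-system get pods -o wide
	kubectl -n kube-system get pods -l k8s-app=kube-dns   # just the component the retry was blocking on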
	I1213 11:50:42.683384  590024 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:50:42.683457  590024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:50:42.698760  590024 system_svc.go:56] duration metric: took 15.347232ms WaitForService to wait for kubelet
	I1213 11:50:42.698803  590024 kubeadm.go:587] duration metric: took 16.582620679s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:50:42.698826  590024 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:50:42.704594  590024 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:50:42.704635  590024 node_conditions.go:123] node cpu capacity is 2
	I1213 11:50:42.704648  590024 node_conditions.go:105] duration metric: took 5.81665ms to run NodePressure ...
	I1213 11:50:42.704662  590024 start.go:242] waiting for startup goroutines ...
	I1213 11:50:42.704670  590024 start.go:247] waiting for cluster config update ...
	I1213 11:50:42.704686  590024 start.go:256] writing updated cluster config ...
	I1213 11:50:42.705039  590024 ssh_runner.go:195] Run: rm -f paused
	I1213 11:50:42.709565  590024 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:50:42.713106  590024 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pr2h6" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:42.717855  590024 pod_ready.go:94] pod "coredns-66bc5c9577-pr2h6" is "Ready"
	I1213 11:50:42.717887  590024 pod_ready.go:86] duration metric: took 4.74968ms for pod "coredns-66bc5c9577-pr2h6" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:42.720393  590024 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:42.732772  590024 pod_ready.go:94] pod "etcd-default-k8s-diff-port-151605" is "Ready"
	I1213 11:50:42.732812  590024 pod_ready.go:86] duration metric: took 12.382927ms for pod "etcd-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:42.737389  590024 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:42.742991  590024 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-151605" is "Ready"
	I1213 11:50:42.743026  590024 pod_ready.go:86] duration metric: took 5.602626ms for pod "kube-apiserver-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:42.745976  590024 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:43.114435  590024 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-151605" is "Ready"
	I1213 11:50:43.114482  590024 pod_ready.go:86] duration metric: took 368.477129ms for pod "kube-controller-manager-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:43.313341  590024 pod_ready.go:83] waiting for pod "kube-proxy-7sl78" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:43.714064  590024 pod_ready.go:94] pod "kube-proxy-7sl78" is "Ready"
	I1213 11:50:43.714099  590024 pod_ready.go:86] duration metric: took 400.723016ms for pod "kube-proxy-7sl78" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:43.913053  590024 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:44.314078  590024 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-151605" is "Ready"
	I1213 11:50:44.314115  590024 pod_ready.go:86] duration metric: took 401.033296ms for pod "kube-scheduler-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:50:44.314134  590024 pod_ready.go:40] duration metric: took 1.60452912s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:50:44.385614  590024 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 11:50:44.392465  590024 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-151605" cluster and "default" namespace by default
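The version line above notes a client/server minor skew of 1 (kubectl 1.33.2 against a 1.34.2 control plane), which is within kubectl's supported one-minor-version window. A quick way to confirm the active context and versions after this point, assuming the default kubeconfig path:

	kubectl config current-context   # expected to print default-k8s-diff-port-151605
	kubectl version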
	I1213 11:50:44.127917  593357 out.go:252]   - Configuring RBAC rules ...
	I1213 11:50:44.128039  593357 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 11:50:44.136092  593357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 11:50:44.148255  593357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 11:50:44.152373  593357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 11:50:44.157119  593357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 11:50:44.161498  593357 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 11:50:44.442186  593357 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 11:50:44.910531  593357 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 11:50:45.445896  593357 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 11:50:45.446784  593357 kubeadm.go:319] 
	I1213 11:50:45.446866  593357 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 11:50:45.446898  593357 kubeadm.go:319] 
	I1213 11:50:45.446984  593357 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 11:50:45.447028  593357 kubeadm.go:319] 
	I1213 11:50:45.447054  593357 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 11:50:45.447150  593357 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 11:50:45.447241  593357 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 11:50:45.447260  593357 kubeadm.go:319] 
	I1213 11:50:45.447374  593357 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 11:50:45.447428  593357 kubeadm.go:319] 
	I1213 11:50:45.447483  593357 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 11:50:45.447571  593357 kubeadm.go:319] 
	I1213 11:50:45.447633  593357 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 11:50:45.447747  593357 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 11:50:45.447875  593357 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 11:50:45.447933  593357 kubeadm.go:319] 
	I1213 11:50:45.448025  593357 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 11:50:45.448125  593357 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 11:50:45.448172  593357 kubeadm.go:319] 
	I1213 11:50:45.448263  593357 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token t98phw.2qq3wderzd63igpg \
	I1213 11:50:45.448389  593357 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 \
	I1213 11:50:45.448410  593357 kubeadm.go:319] 	--control-plane 
	I1213 11:50:45.448417  593357 kubeadm.go:319] 
	I1213 11:50:45.448536  593357 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 11:50:45.448542  593357 kubeadm.go:319] 
	I1213 11:50:45.448631  593357 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token t98phw.2qq3wderzd63igpg \
	I1213 11:50:45.448755  593357 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 
	I1213 11:50:45.453785  593357 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 11:50:45.454098  593357 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:50:45.454224  593357 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
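The join commands printed above embed a --discovery-token-ca-cert-hash. That value can be recomputed from the cluster CA with the standard kubeadm recipe; a sketch, run on the control-plane node (e.g. via minikube ssh):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'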
	I1213 11:50:45.454246  593357 cni.go:84] Creating CNI manager for ""
	I1213 11:50:45.454254  593357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:50:45.459331  593357 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 11:50:45.462281  593357 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 11:50:45.467082  593357 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 11:50:45.467101  593357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1213 11:50:45.482053  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
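The CNI step above applies minikube's kindnet manifest. Whether it landed can be checked through the resulting DaemonSet and its pods; a sketch, assuming the manifest keeps its usual kindnet naming (the kindnet-4bq9f pod later in this log suggests it does):

	kubectl -n kube-system get daemonset kindnet
	kubectl -n kube-system get pods -o wide | grep kindnet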
	I1213 11:50:45.803850  593357 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 11:50:45.803982  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:45.804076  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-326948 minikube.k8s.io/updated_at=2025_12_13T11_50_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=embed-certs-326948 minikube.k8s.io/primary=true
	I1213 11:50:45.825540  593357 ops.go:34] apiserver oom_adj: -16
	I1213 11:50:45.966326  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:46.466789  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:46.966925  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:47.467124  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:47.966955  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:48.466461  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:48.967287  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:49.466899  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:49.966901  593357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 11:50:50.065217  593357 kubeadm.go:1114] duration metric: took 4.261279558s to wait for elevateKubeSystemPrivileges
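The elevateKubeSystemPrivileges step above loops on kubectl get sa default until the default ServiceAccount exists and, in parallel, binds cluster-admin to kube-system:default (the minikube-rbac binding created at 11:50:45). The equivalent manual commands, as a sketch:

	kubectl -n default get serviceaccount default
	kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default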
	I1213 11:50:50.065253  593357 kubeadm.go:403] duration metric: took 24.572503562s to StartCluster
	I1213 11:50:50.065270  593357 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:50.065332  593357 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:50:50.066724  593357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:50:50.066968  593357 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:50:50.067122  593357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 11:50:50.067376  593357 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:50:50.067419  593357 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:50:50.067477  593357 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-326948"
	I1213 11:50:50.067496  593357 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-326948"
	I1213 11:50:50.067556  593357 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:50:50.068368  593357 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:50:50.068514  593357 addons.go:70] Setting default-storageclass=true in profile "embed-certs-326948"
	I1213 11:50:50.068528  593357 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-326948"
	I1213 11:50:50.068761  593357 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:50:50.071136  593357 out.go:179] * Verifying Kubernetes components...
	I1213 11:50:50.074421  593357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:50:50.111610  593357 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:50:50.114532  593357 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:50:50.114554  593357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:50:50.114609  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:50.115359  593357 addons.go:239] Setting addon default-storageclass=true in "embed-certs-326948"
	I1213 11:50:50.115396  593357 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:50:50.117167  593357 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:50:50.147669  593357 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:50:50.147690  593357 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:50:50.147874  593357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:50:50.167648  593357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:50:50.182342  593357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:50:50.453580  593357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 11:50:50.453697  593357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:50:50.482492  593357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:50:50.543050  593357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:50:51.159731  593357 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1213 11:50:51.160720  593357 node_ready.go:35] waiting up to 6m0s for node "embed-certs-326948" to be "Ready" ...
	I1213 11:50:51.587808  593357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.105224651s)
	I1213 11:50:51.587877  593357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.044802801s)
	I1213 11:50:51.600335  593357 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
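The two addons enabled here install the storage-provisioner pod and a default StorageClass; a sketch of verifying both, assuming kubectl access to the embed-certs-326948 context:

	kubectl --context embed-certs-326948 get storageclass
	kubectl --context embed-certs-326948 -n kube-system get pod storage-provisioner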
	
	
	==> CRI-O <==
	Dec 13 11:50:42 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:42.253779257Z" level=info msg="Created container 2515f79a04f939502bbcae35e5d753c5f2322435b2b327d76ba2f61abef29752: kube-system/coredns-66bc5c9577-pr2h6/coredns" id=f9921915-3843-46c8-9f82-5bfc5fc82194 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:50:42 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:42.254977313Z" level=info msg="Starting container: 2515f79a04f939502bbcae35e5d753c5f2322435b2b327d76ba2f61abef29752" id=d8d9276b-006a-429a-a924-add2c3485908 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:50:42 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:42.261750557Z" level=info msg="Started container" PID=1782 containerID=2515f79a04f939502bbcae35e5d753c5f2322435b2b327d76ba2f61abef29752 description=kube-system/coredns-66bc5c9577-pr2h6/coredns id=d8d9276b-006a-429a-a924-add2c3485908 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ffe9991140e957cf3a574f33d6fbfdfdeff28f3ea4a1a5155c7ad476eed273f
	Dec 13 11:50:44 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:44.991829299Z" level=info msg="Running pod sandbox: default/busybox/POD" id=33474d19-43fd-4b3b-ab33-219c31e80265 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 11:50:44 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:44.991902129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.001938363Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fe76732628d8a81ecab9ce02cc78b6e1b507e52ea4a19e35957c9c54f8068503 UID:0e627c31-a482-4a58-a8f5-410ea307b7ed NetNS:/var/run/netns/f999f79e-837a-4fc1-b10d-dde8c8e10a85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000497f40}] Aliases:map[]}"
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.002160191Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.065230396Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fe76732628d8a81ecab9ce02cc78b6e1b507e52ea4a19e35957c9c54f8068503 UID:0e627c31-a482-4a58-a8f5-410ea307b7ed NetNS:/var/run/netns/f999f79e-837a-4fc1-b10d-dde8c8e10a85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000497f40}] Aliases:map[]}"
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.065472712Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.07539473Z" level=info msg="Ran pod sandbox fe76732628d8a81ecab9ce02cc78b6e1b507e52ea4a19e35957c9c54f8068503 with infra container: default/busybox/POD" id=33474d19-43fd-4b3b-ab33-219c31e80265 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.081510952Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=10d1566a-48cb-45b5-89a9-9dee3b45e20c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.081715836Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=10d1566a-48cb-45b5-89a9-9dee3b45e20c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.08177895Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=10d1566a-48cb-45b5-89a9-9dee3b45e20c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.084735821Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a119f7a3-f71d-4bc4-8c5d-78066ddefebd name=/runtime.v1.ImageService/PullImage
	Dec 13 11:50:45 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:45.088513688Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.180069904Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a119f7a3-f71d-4bc4-8c5d-78066ddefebd name=/runtime.v1.ImageService/PullImage
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.181092574Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=857fb431-adf2-4300-be77-6b927e2a88ac name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.18394743Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c9843f27-24bd-4539-b732-0f02100ee447 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.190439163Z" level=info msg="Creating container: default/busybox/busybox" id=dea69b6e-ff26-4658-ace8-7057f9da0590 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.190554109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.195440866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.196232871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.217296274Z" level=info msg="Created container 2334fa5a985159e163d09b003413ff02a92da7afee9f69f0e92e163c68b23cfa: default/busybox/busybox" id=dea69b6e-ff26-4658-ace8-7057f9da0590 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.219749223Z" level=info msg="Starting container: 2334fa5a985159e163d09b003413ff02a92da7afee9f69f0e92e163c68b23cfa" id=821a4fdd-96a2-4c63-a5e6-73df71f886a6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:50:47 default-k8s-diff-port-151605 crio[840]: time="2025-12-13T11:50:47.22332269Z" level=info msg="Started container" PID=1842 containerID=2334fa5a985159e163d09b003413ff02a92da7afee9f69f0e92e163c68b23cfa description=default/busybox/busybox id=821a4fdd-96a2-4c63-a5e6-73df71f886a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe76732628d8a81ecab9ce02cc78b6e1b507e52ea4a19e35957c9c54f8068503
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2334fa5a98515       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   fe76732628d8a       busybox                                                default
	2515f79a04f93       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   2ffe9991140e9       coredns-66bc5c9577-pr2h6                               kube-system
	9526c77709b63       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   703adf4874519       storage-provisioner                                    kube-system
	1a6a5760c2bff       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   4ba2bf82796f8       kindnet-4bq9f                                          kube-system
	cf606bced2292       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      25 seconds ago      Running             kube-proxy                0                   1e5afe1b2585d       kube-proxy-7sl78                                       kube-system
	d7049d5aa8748       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      41 seconds ago      Running             kube-apiserver            0                   a6d3a81f05d70       kube-apiserver-default-k8s-diff-port-151605            kube-system
	40b42dcd3891a       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      41 seconds ago      Running             kube-controller-manager   0                   4d1edb27bf3a2       kube-controller-manager-default-k8s-diff-port-151605   kube-system
	4b3153451e9f1       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      41 seconds ago      Running             kube-scheduler            0                   066e38c06086b       kube-scheduler-default-k8s-diff-port-151605            kube-system
	2e7eb0084e7be       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      41 seconds ago      Running             etcd                      0                   e7d2baecc4373       etcd-default-k8s-diff-port-151605                      kube-system
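The container status table above comes from the CRI endpoint; it can be reproduced on the node itself, as a sketch (the -p flag selects the minikube profile):

	minikube ssh -p default-k8s-diff-port-151605
	sudo crictl ps -a    # containers, matching the table above
	sudo crictl pods     # the backing pod sandboxes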
	
	
	==> coredns [2515f79a04f939502bbcae35e5d753c5f2322435b2b327d76ba2f61abef29752] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51739 - 6892 "HINFO IN 1610163409290444238.3520479731393325633. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.076852649s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-151605
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-151605
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=default-k8s-diff-port-151605
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_50_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-151605
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:50:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:50:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:50:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:50:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:50:52 +0000   Sat, 13 Dec 2025 11:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-151605
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                a385de42-c8e0-4943-b893-df4c54e93d41
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-pr2h6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-default-k8s-diff-port-151605                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-4bq9f                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-151605             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-151605    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-7sl78                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-151605             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node default-k8s-diff-port-151605 event: Registered Node default-k8s-diff-port-151605 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-151605 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec13 11:20] overlayfs: idmapped layers are currently not supported
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2e7eb0084e7beef5345a246d0357b62b35af40525d4776c8239042661a0a78dd] <==
	{"level":"warn","ts":"2025-12-13T11:50:15.855827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:15.892522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:15.915654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:15.989342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.054856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.093977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.144099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.158381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.195553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.274199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.300216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.336080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.370987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.446183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.457952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.557562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.607238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.632213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.710069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.759952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.876689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.906002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:16.966475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:17.068189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:17.320012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:50:54 up  3:33,  0 user,  load average: 3.34, 2.70, 2.25
	Linux default-k8s-diff-port-151605 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a6a5760c2bff477c2aff2e2130e84e07c61afa7a622c15984d1ee0b1100d15a] <==
	I1213 11:50:31.028191       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:50:31.028550       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:50:31.028741       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:50:31.028797       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:50:31.028831       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:50:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:50:31.223076       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:50:31.223155       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:50:31.223191       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:50:31.224353       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:50:31.423674       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:50:31.423778       1 metrics.go:72] Registering metrics
	I1213 11:50:31.423889       1 controller.go:711] "Syncing nftables rules"
	I1213 11:50:41.230972       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:50:41.231103       1 main.go:301] handling current node
	I1213 11:50:51.223737       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:50:51.223799       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d7049d5aa874816abc25834efb635ce47d079e6c4d76da37e9b1071b9c8e17cd] <==
	I1213 11:50:18.746817       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:50:18.746824       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:50:18.789118       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:18.789325       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 11:50:18.818881       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:18.819481       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:50:18.925006       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:50:19.352421       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 11:50:19.357019       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 11:50:19.357046       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:50:20.067666       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:50:20.130771       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:50:20.255196       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 11:50:20.263299       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1213 11:50:20.264547       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 11:50:20.269758       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:50:20.551092       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:50:21.162653       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 11:50:21.230867       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 11:50:21.258590       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 11:50:25.729206       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:25.745268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:26.397708       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:50:26.609698       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1213 11:50:52.827419       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:56558: use of closed network connection
	
	
	==> kube-controller-manager [40b42dcd3891a68e25e749a3e22321158fc58d51efb95c2c23f4287c7b00b512] <==
	I1213 11:50:25.637830       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1213 11:50:25.637927       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 11:50:25.645953       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 11:50:25.638115       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:50:25.664677       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:50:25.664819       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:50:25.664623       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:50:25.665040       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 11:50:25.665174       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:50:25.646709       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:50:25.679824       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 11:50:25.683196       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:50:25.707729       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:50:25.707853       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 11:50:25.646512       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 11:50:25.675956       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 11:50:25.708623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 11:50:25.678488       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 11:50:25.666691       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-151605" podCIDRs=["10.244.0.0/24"]
	I1213 11:50:25.717416       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 11:50:25.782950       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:50:25.782993       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:50:25.783002       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:50:25.811699       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:50:45.640108       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cf606bced22927575ebaaf697a13763171b1f945aad89184b5f4638ac2e58d52] <==
	I1213 11:50:28.963217       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:50:29.125318       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:50:29.225826       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:50:29.225867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:50:29.225943       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:50:29.441537       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:50:29.441594       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:50:29.511946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:50:29.512275       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:50:29.512297       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:50:29.544046       1 config.go:200] "Starting service config controller"
	I1213 11:50:29.544066       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:50:29.544082       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:50:29.544087       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:50:29.544098       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:50:29.544101       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:50:29.544758       1 config.go:309] "Starting node config controller"
	I1213 11:50:29.544766       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:50:29.544772       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:50:29.644662       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:50:29.644696       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:50:29.644743       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4b3153451e9f1a6c36ab89668dd866d1c0021e800ffefd6da89be8a9476684d8] <==
	E1213 11:50:18.603790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 11:50:18.603899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 11:50:18.603976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 11:50:18.604059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:50:18.604128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 11:50:18.604197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 11:50:18.604338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 11:50:18.604412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 11:50:18.604474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 11:50:18.604527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 11:50:18.604669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 11:50:18.604708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 11:50:18.604820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:50:19.470054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 11:50:19.509961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 11:50:19.517304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 11:50:19.588124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 11:50:19.689230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 11:50:19.726557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 11:50:19.726557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 11:50:19.731508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:50:19.793479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 11:50:19.795850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 11:50:19.893308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1213 11:50:22.265417       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 11:50:26 default-k8s-diff-port-151605 kubelet[1316]: E1213 11:50:26.935412    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-7sl78\" is forbidden: User \"system:node:default-k8s-diff-port-151605\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-151605' and this object" podUID="fa439c4d-470b-4cd1-868b-45116bb9e6f5" pod="kube-system/kube-proxy-7sl78"
	Dec 13 11:50:26 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:26.996087    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa439c4d-470b-4cd1-868b-45116bb9e6f5-kube-proxy\") pod \"kube-proxy-7sl78\" (UID: \"fa439c4d-470b-4cd1-868b-45116bb9e6f5\") " pod="kube-system/kube-proxy-7sl78"
	Dec 13 11:50:26 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:26.996199    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa439c4d-470b-4cd1-868b-45116bb9e6f5-lib-modules\") pod \"kube-proxy-7sl78\" (UID: \"fa439c4d-470b-4cd1-868b-45116bb9e6f5\") " pod="kube-system/kube-proxy-7sl78"
	Dec 13 11:50:26 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:26.996224    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6q25\" (UniqueName: \"kubernetes.io/projected/fa439c4d-470b-4cd1-868b-45116bb9e6f5-kube-api-access-p6q25\") pod \"kube-proxy-7sl78\" (UID: \"fa439c4d-470b-4cd1-868b-45116bb9e6f5\") " pod="kube-system/kube-proxy-7sl78"
	Dec 13 11:50:26 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:26.996278    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa439c4d-470b-4cd1-868b-45116bb9e6f5-xtables-lock\") pod \"kube-proxy-7sl78\" (UID: \"fa439c4d-470b-4cd1-868b-45116bb9e6f5\") " pod="kube-system/kube-proxy-7sl78"
	Dec 13 11:50:27 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:27.100750    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/dc9ca822-d910-4583-afbc-ea67e425f553-cni-cfg\") pod \"kindnet-4bq9f\" (UID: \"dc9ca822-d910-4583-afbc-ea67e425f553\") " pod="kube-system/kindnet-4bq9f"
	Dec 13 11:50:27 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:27.100799    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xtg\" (UniqueName: \"kubernetes.io/projected/dc9ca822-d910-4583-afbc-ea67e425f553-kube-api-access-d4xtg\") pod \"kindnet-4bq9f\" (UID: \"dc9ca822-d910-4583-afbc-ea67e425f553\") " pod="kube-system/kindnet-4bq9f"
	Dec 13 11:50:27 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:27.100857    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9ca822-d910-4583-afbc-ea67e425f553-xtables-lock\") pod \"kindnet-4bq9f\" (UID: \"dc9ca822-d910-4583-afbc-ea67e425f553\") " pod="kube-system/kindnet-4bq9f"
	Dec 13 11:50:27 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:27.100891    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9ca822-d910-4583-afbc-ea67e425f553-lib-modules\") pod \"kindnet-4bq9f\" (UID: \"dc9ca822-d910-4583-afbc-ea67e425f553\") " pod="kube-system/kindnet-4bq9f"
	Dec 13 11:50:28 default-k8s-diff-port-151605 kubelet[1316]: E1213 11:50:28.101547    1316 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:50:28 default-k8s-diff-port-151605 kubelet[1316]: E1213 11:50:28.101667    1316 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa439c4d-470b-4cd1-868b-45116bb9e6f5-kube-proxy podName:fa439c4d-470b-4cd1-868b-45116bb9e6f5 nodeName:}" failed. No retries permitted until 2025-12-13 11:50:28.601639725 +0000 UTC m=+7.553713818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fa439c4d-470b-4cd1-868b-45116bb9e6f5-kube-proxy") pod "kube-proxy-7sl78" (UID: "fa439c4d-470b-4cd1-868b-45116bb9e6f5") : failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:50:28 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:28.124869    1316 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 13 11:50:30 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:30.569249    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7sl78" podStartSLOduration=4.569229103 podStartE2EDuration="4.569229103s" podCreationTimestamp="2025-12-13 11:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:50:29.451369894 +0000 UTC m=+8.403443987" watchObservedRunningTime="2025-12-13 11:50:30.569229103 +0000 UTC m=+9.521303196"
	Dec 13 11:50:31 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:31.518677    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4bq9f" podStartSLOduration=2.899933202 podStartE2EDuration="5.51865901s" podCreationTimestamp="2025-12-13 11:50:26 +0000 UTC" firstStartedPulling="2025-12-13 11:50:28.233490996 +0000 UTC m=+7.185565089" lastFinishedPulling="2025-12-13 11:50:30.852216804 +0000 UTC m=+9.804290897" observedRunningTime="2025-12-13 11:50:31.484538183 +0000 UTC m=+10.436612276" watchObservedRunningTime="2025-12-13 11:50:31.51865901 +0000 UTC m=+10.470733111"
	Dec 13 11:50:41 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:41.697706    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 11:50:41 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:41.807079    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9a39a61a-2538-4bf8-ab07-ccfbbd952666-tmp\") pod \"storage-provisioner\" (UID: \"9a39a61a-2538-4bf8-ab07-ccfbbd952666\") " pod="kube-system/storage-provisioner"
	Dec 13 11:50:41 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:41.807320    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c52a25e5-9ec2-476b-bce3-7e7a2129e082-config-volume\") pod \"coredns-66bc5c9577-pr2h6\" (UID: \"c52a25e5-9ec2-476b-bce3-7e7a2129e082\") " pod="kube-system/coredns-66bc5c9577-pr2h6"
	Dec 13 11:50:41 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:41.807433    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhnzj\" (UniqueName: \"kubernetes.io/projected/c52a25e5-9ec2-476b-bce3-7e7a2129e082-kube-api-access-lhnzj\") pod \"coredns-66bc5c9577-pr2h6\" (UID: \"c52a25e5-9ec2-476b-bce3-7e7a2129e082\") " pod="kube-system/coredns-66bc5c9577-pr2h6"
	Dec 13 11:50:41 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:41.807596    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsls\" (UniqueName: \"kubernetes.io/projected/9a39a61a-2538-4bf8-ab07-ccfbbd952666-kube-api-access-fcsls\") pod \"storage-provisioner\" (UID: \"9a39a61a-2538-4bf8-ab07-ccfbbd952666\") " pod="kube-system/storage-provisioner"
	Dec 13 11:50:42 default-k8s-diff-port-151605 kubelet[1316]: W1213 11:50:42.174577    1316 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/crio-2ffe9991140e957cf3a574f33d6fbfdfdeff28f3ea4a1a5155c7ad476eed273f WatchSource:0}: Error finding container 2ffe9991140e957cf3a574f33d6fbfdfdeff28f3ea4a1a5155c7ad476eed273f: Status 404 returned error can't find the container with id 2ffe9991140e957cf3a574f33d6fbfdfdeff28f3ea4a1a5155c7ad476eed273f
	Dec 13 11:50:42 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:42.519316    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pr2h6" podStartSLOduration=16.519296293 podStartE2EDuration="16.519296293s" podCreationTimestamp="2025-12-13 11:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:50:42.497762775 +0000 UTC m=+21.449836867" watchObservedRunningTime="2025-12-13 11:50:42.519296293 +0000 UTC m=+21.471370402"
	Dec 13 11:50:44 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:44.681961    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.681922081 podStartE2EDuration="16.681922081s" podCreationTimestamp="2025-12-13 11:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:50:42.549663296 +0000 UTC m=+21.501737405" watchObservedRunningTime="2025-12-13 11:50:44.681922081 +0000 UTC m=+23.633996182"
	Dec 13 11:50:44 default-k8s-diff-port-151605 kubelet[1316]: I1213 11:50:44.731032    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz75m\" (UniqueName: \"kubernetes.io/projected/0e627c31-a482-4a58-a8f5-410ea307b7ed-kube-api-access-sz75m\") pod \"busybox\" (UID: \"0e627c31-a482-4a58-a8f5-410ea307b7ed\") " pod="default/busybox"
	Dec 13 11:50:45 default-k8s-diff-port-151605 kubelet[1316]: W1213 11:50:45.072752    1316 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/crio-fe76732628d8a81ecab9ce02cc78b6e1b507e52ea4a19e35957c9c54f8068503 WatchSource:0}: Error finding container fe76732628d8a81ecab9ce02cc78b6e1b507e52ea4a19e35957c9c54f8068503: Status 404 returned error can't find the container with id fe76732628d8a81ecab9ce02cc78b6e1b507e52ea4a19e35957c9c54f8068503
	Dec 13 11:50:52 default-k8s-diff-port-151605 kubelet[1316]: E1213 11:50:52.827954    1316 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47610->127.0.0.1:35953: write tcp 127.0.0.1:47610->127.0.0.1:35953: write: broken pipe
	
	
	==> storage-provisioner [9526c77709b6368e2b37fd5ae1297b48ea1cce27cd1ce2d3a5c2127335428a65] <==
	I1213 11:50:42.257454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:50:42.342498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:50:42.342683       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 11:50:42.365585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:42.374428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:50:42.374604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:50:42.377152       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-151605_6a942da1-1505-456d-9780-ebe4f8bda41b!
	I1213 11:50:42.377292       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6471950e-eece-40e8-8a15-868fd2831bde", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-151605_6a942da1-1505-456d-9780-ebe4f8bda41b became leader
	W1213 11:50:42.388479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:42.403065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:50:42.481952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-151605_6a942da1-1505-456d-9780-ebe4f8bda41b!
	W1213 11:50:44.406654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:44.432737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:46.436896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:46.444246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:48.447675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:48.452620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:50.456015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:50.463745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:52.467828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:52.472900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:54.479803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:50:54.486024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-151605 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (346.082617ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:51:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
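For reference, a minimal by-hand reproduction of the paused-state check that failed above — assuming the same profile name and that the node container is reachable over `minikube ssh` (both taken from this run, not guaranteed elsewhere) — would be:

	out/minikube-linux-arm64 ssh -p embed-certs-326948 -- "sudo runc list -f json"   # the command the addon check shells out to, per the stderr above
	out/minikube-linux-arm64 ssh -p embed-certs-326948 -- "ls -ld /run/runc"         # checks whether the /run/runc directory reported missing above is still absent

The second command only confirms the symptom ("open /run/runc: no such file or directory"); it does not establish why crio/runc had no state directory at that point.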
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-326948 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-326948 describe deploy/metrics-server -n kube-system: exit status 1 (142.003264ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-326948 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
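A sketch of how the expected image override could be inspected directly, assuming the metrics-server deployment had actually been created (it was not, per the NotFound error above); the jsonpath expression is illustrative and not part of the test:

	kubectl --context embed-certs-326948 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the assertion at start_stop_delete_test.go:219 expects this to contain fake.domain/registry.k8s.io/echoserver:1.4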
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-326948
helpers_test.go:244: (dbg) docker inspect embed-certs-326948:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14",
	        "Created": "2025-12-13T11:50:16.044997755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 593844,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:50:16.117736061Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/hosts",
	        "LogPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14-json.log",
	        "Name": "/embed-certs-326948",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-326948:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-326948",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14",
	                "LowerDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-326948",
	                "Source": "/var/lib/docker/volumes/embed-certs-326948/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-326948",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-326948",
	                "name.minikube.sigs.k8s.io": "embed-certs-326948",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae703bb98371775805cfb7a82e3a8f96cb6a9751d00123f376e4bcdbafa73885",
	            "SandboxKey": "/var/run/docker/netns/ae703bb98371",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-326948": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:46:86:3b:9c:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b063c432202ef9f217d4b391af56f96171f14adb917467f7393ca248725893a",
	                    "EndpointID": "23fa4b1d93208a80c0ee834870ea40c2993b6b158109436c1f25f7c9174c7685",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-326948",
	                        "4fffdfd58e00"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-326948 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-326948 logs -n 25: (1.754784528s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-854588                                                                                                                                                                                                                  │ kubernetes-upgrade-854588    │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ delete  │ -p force-systemd-env-181508                                                                                                                                                                                                                   │ force-systemd-env-181508     │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ start   │ -p cert-options-522461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:46 UTC │
	│ ssh     │ cert-options-522461 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:46 UTC │ 13 Dec 25 11:47 UTC │
	│ ssh     │ -p cert-options-522461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ delete  │ -p cert-options-522461                                                                                                                                                                                                                        │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │                     │
	│ stop    │ -p old-k8s-version-051699 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-051699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:51:07
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:51:07.829318  597382 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:07.829475  597382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:07.829488  597382 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:07.829494  597382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:07.829749  597382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:51:07.830116  597382 out.go:368] Setting JSON to false
	I1213 11:51:07.831067  597382 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12820,"bootTime":1765613848,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:51:07.831142  597382 start.go:143] virtualization:  
	I1213 11:51:07.834243  597382 out.go:179] * [default-k8s-diff-port-151605] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:07.838024  597382 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:07.838161  597382 notify.go:221] Checking for updates...
	I1213 11:51:07.844306  597382 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:07.847355  597382 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:07.850365  597382 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:51:07.853380  597382 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:07.856762  597382 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:07.860355  597382 config.go:182] Loaded profile config "default-k8s-diff-port-151605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:07.860943  597382 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:07.885721  597382 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:07.885859  597382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:07.941683  597382 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:07.932570049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:07.941785  597382 docker.go:319] overlay module found
	I1213 11:51:07.944823  597382 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:07.947476  597382 start.go:309] selected driver: docker
	I1213 11:51:07.947492  597382 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-151605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-151605 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:07.947626  597382 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:07.948340  597382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:08.006898  597382 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:51:07.995149014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:08.007275  597382 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:51:08.007310  597382 cni.go:84] Creating CNI manager for ""
	I1213 11:51:08.007367  597382 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:51:08.007408  597382 start.go:353] cluster config:
	{Name:default-k8s-diff-port-151605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-151605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:08.012421  597382 out.go:179] * Starting "default-k8s-diff-port-151605" primary control-plane node in "default-k8s-diff-port-151605" cluster
	I1213 11:51:08.015317  597382 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:51:08.018296  597382 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:08.021231  597382 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:51:08.021291  597382 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:51:08.021301  597382 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:08.021347  597382 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:08.021432  597382 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:08.021446  597382 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:51:08.021562  597382 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/config.json ...
	I1213 11:51:08.043745  597382 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:08.043772  597382 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:08.043788  597382 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:08.043823  597382 start.go:360] acquireMachinesLock for default-k8s-diff-port-151605: {Name:mkffa1b49c702fd2120917ed923c6be254da1808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:08.043886  597382 start.go:364] duration metric: took 39.565µs to acquireMachinesLock for "default-k8s-diff-port-151605"
	I1213 11:51:08.043910  597382 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:08.043921  597382 fix.go:54] fixHost starting: 
	I1213 11:51:08.044185  597382 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:51:08.062842  597382 fix.go:112] recreateIfNeeded on default-k8s-diff-port-151605: state=Stopped err=<nil>
	W1213 11:51:08.062874  597382 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 11:51:08.066105  597382 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-151605" ...
	I1213 11:51:08.066207  597382 cli_runner.go:164] Run: docker start default-k8s-diff-port-151605
	I1213 11:51:08.348199  597382 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:51:08.373236  597382 kic.go:430] container "default-k8s-diff-port-151605" state is running.
	I1213 11:51:08.374234  597382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-151605
	I1213 11:51:08.398069  597382 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/config.json ...
	I1213 11:51:08.398307  597382 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:08.398370  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:08.424846  597382 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:08.425186  597382 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1213 11:51:08.425206  597382 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:08.426196  597382 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:51:11.579165  597382 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-151605
	
	I1213 11:51:11.579191  597382 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-151605"
	I1213 11:51:11.579266  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:11.600426  597382 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:11.600737  597382 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1213 11:51:11.600755  597382 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-151605 && echo "default-k8s-diff-port-151605" | sudo tee /etc/hostname
	I1213 11:51:11.760707  597382 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-151605
	
	I1213 11:51:11.760784  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:11.779653  597382 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:11.779991  597382 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1213 11:51:11.780014  597382 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-151605' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-151605/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-151605' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:51:11.931770  597382 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:51:11.931833  597382 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:51:11.931858  597382 ubuntu.go:190] setting up certificates
	I1213 11:51:11.931876  597382 provision.go:84] configureAuth start
	I1213 11:51:11.931939  597382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-151605
	I1213 11:51:11.954306  597382 provision.go:143] copyHostCerts
	I1213 11:51:11.954394  597382 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:51:11.954409  597382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:51:11.954486  597382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:51:11.954580  597382 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:51:11.954590  597382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:51:11.954615  597382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:51:11.954669  597382 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:51:11.954677  597382 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:51:11.954700  597382 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:51:11.954752  597382 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-151605 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-151605 localhost minikube]
	I1213 11:51:12.078789  597382 provision.go:177] copyRemoteCerts
	I1213 11:51:12.078873  597382 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:12.078915  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:12.098438  597382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:51:12.207451  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1213 11:51:12.226283  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:12.245343  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:12.264269  597382 provision.go:87] duration metric: took 332.369187ms to configureAuth
	I1213 11:51:12.264312  597382 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:12.264498  597382 config.go:182] Loaded profile config "default-k8s-diff-port-151605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:12.264612  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:12.283050  597382 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:12.283383  597382 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1213 11:51:12.283408  597382 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:51:12.635400  597382 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:51:12.635424  597382 machine.go:97] duration metric: took 4.237102654s to provisionDockerMachine
	I1213 11:51:12.635436  597382 start.go:293] postStartSetup for "default-k8s-diff-port-151605" (driver="docker")
	I1213 11:51:12.635447  597382 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:12.635553  597382 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:12.635611  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:12.656418  597382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:51:12.763565  597382 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:12.767246  597382 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:12.767277  597382 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:12.767288  597382 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:51:12.767342  597382 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:51:12.767424  597382 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:51:12.767549  597382 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:12.775365  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:51:12.793399  597382 start.go:296] duration metric: took 157.947809ms for postStartSetup
	I1213 11:51:12.793518  597382 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:12.793596  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:12.811204  597382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:51:12.913349  597382 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:12.918550  597382 fix.go:56] duration metric: took 4.874620634s for fixHost
	I1213 11:51:12.918591  597382 start.go:83] releasing machines lock for "default-k8s-diff-port-151605", held for 4.874691658s
	I1213 11:51:12.918690  597382 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-151605
	I1213 11:51:12.936946  597382 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:12.937008  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:12.937009  597382 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:12.937081  597382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:51:12.957979  597382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:51:12.959596  597382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:51:13.148852  597382 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:13.160926  597382 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:51:13.199256  597382 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:13.203642  597382 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:13.203721  597382 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:13.211534  597382 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:51:13.211558  597382 start.go:496] detecting cgroup driver to use...
	I1213 11:51:13.211590  597382 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:13.211654  597382 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:51:13.227190  597382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:51:13.241987  597382 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:13.242055  597382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:13.258085  597382 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:13.271233  597382 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:13.382272  597382 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:13.492139  597382 docker.go:234] disabling docker service ...
	I1213 11:51:13.492250  597382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:13.508806  597382 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:13.523238  597382 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:13.648818  597382 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:13.777050  597382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:13.791105  597382 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:13.805167  597382 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:51:13.805276  597382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:13.815077  597382 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:51:13.815191  597382 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:13.824674  597382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:13.834109  597382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:13.843306  597382 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:13.852056  597382 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:13.861873  597382 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:13.870695  597382 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:13.880045  597382 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:13.887663  597382 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:13.895682  597382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:14.027849  597382 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:51:14.210980  597382 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:51:14.211050  597382 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:51:14.215039  597382 start.go:564] Will wait 60s for crictl version
	I1213 11:51:14.215108  597382 ssh_runner.go:195] Run: which crictl
	I1213 11:51:14.218750  597382 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:14.247161  597382 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:51:14.247276  597382 ssh_runner.go:195] Run: crio --version
	I1213 11:51:14.279135  597382 ssh_runner.go:195] Run: crio --version
	I1213 11:51:14.314440  597382 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:51:14.317248  597382 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-151605 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:51:14.334020  597382 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:14.337934  597382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:14.347663  597382 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-151605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-151605 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:14.347792  597382 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:51:14.347853  597382 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:14.389771  597382 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:51:14.389793  597382 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:51:14.389849  597382 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:14.425261  597382 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:51:14.425285  597382 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:14.425292  597382 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.2 crio true true} ...
	I1213 11:51:14.425391  597382 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-151605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-151605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:51:14.425474  597382 ssh_runner.go:195] Run: crio config
	I1213 11:51:14.491700  597382 cni.go:84] Creating CNI manager for ""
	I1213 11:51:14.491722  597382 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:51:14.491744  597382 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:51:14.491776  597382 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-151605 NodeName:default-k8s-diff-port-151605 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:14.491922  597382 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-151605"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:51:14.492008  597382 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:51:14.500150  597382 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:14.500272  597382 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:14.508188  597382 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1213 11:51:14.521621  597382 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:51:14.535052  597382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1213 11:51:14.547990  597382 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:14.551638  597382 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:14.561321  597382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:14.686759  597382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:14.703398  597382 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605 for IP: 192.168.85.2
	I1213 11:51:14.703421  597382 certs.go:195] generating shared ca certs ...
	I1213 11:51:14.703438  597382 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:14.703646  597382 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:51:14.703703  597382 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:51:14.703716  597382 certs.go:257] generating profile certs ...
	I1213 11:51:14.703827  597382 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.key
	I1213 11:51:14.703904  597382 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/apiserver.key.e2716210
	I1213 11:51:14.703948  597382 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/proxy-client.key
	I1213 11:51:14.704067  597382 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:51:14.704105  597382 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:14.704118  597382 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:14.704144  597382 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:14.704171  597382 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:14.704202  597382 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:51:14.704255  597382 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:51:14.704869  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:14.725876  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:14.743604  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:14.763872  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:14.782768  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 11:51:14.800374  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:51:14.819268  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:14.845881  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:51:14.868263  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:51:14.889097  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:14.910472  597382 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:51:14.932333  597382 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:14.950796  597382 ssh_runner.go:195] Run: openssl version
	I1213 11:51:14.957190  597382 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:51:14.965268  597382 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:51:14.973678  597382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:51:14.977954  597382 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:51:14.978058  597382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:51:15.027505  597382 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:15.040319  597382 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:15.049068  597382 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:15.058693  597382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:15.063120  597382 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:15.063240  597382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:15.105681  597382 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:15.113612  597382 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:51:15.121442  597382 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:51:15.130926  597382 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:51:15.135192  597382 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:51:15.135290  597382 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:51:15.178066  597382 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:51:15.185762  597382 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:15.189805  597382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:15.233632  597382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:15.274871  597382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:15.315741  597382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:15.358310  597382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:15.416372  597382 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 11:51:15.491625  597382 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-151605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-151605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:15.491716  597382 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:15.491834  597382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:15.561584  597382 cri.go:89] found id: "cbd9d49b05b8a5dd0dc77bf63238bdf30ee239621287d026e486c91a38c69194"
	I1213 11:51:15.561609  597382 cri.go:89] found id: "41f26b68d203d9d83d81376bab5feea3fb613ac275331c49aa37fbebfa938c29"
	I1213 11:51:15.561615  597382 cri.go:89] found id: "c6a26bd3f3f3a9aadd06af1e7019a9a4ad95fe27fc8cd6cd2866891c0293ac91"
	I1213 11:51:15.561628  597382 cri.go:89] found id: "54cffecfcbe7d79dd9b85c2aea28df92440fb375b7e38669ef73479908f14bd0"
	I1213 11:51:15.561631  597382 cri.go:89] found id: ""
	I1213 11:51:15.561722  597382 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:51:15.591398  597382 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:51:15Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:51:15.591571  597382 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:15.605401  597382 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:15.605479  597382 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:15.605565  597382 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:15.615826  597382 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:15.616780  597382 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-151605" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:15.617366  597382 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-151605" cluster setting kubeconfig missing "default-k8s-diff-port-151605" context setting]
	I1213 11:51:15.618200  597382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
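
The repair step above checks whether the profile appears in the kubeconfig and rewrites the file when the cluster or context entry is missing. A small client-go sketch of the same existence check (an illustration, not minikube's kubeconfig.go; path and profile name are arguments):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path, name := os.Args[1], os.Args[2] // e.g. the kubeconfig path and "default-k8s-diff-port-151605"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if _, ok := cfg.Clusters[name]; !ok {
		fmt.Printf("kubeconfig missing %q cluster setting\n", name)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("kubeconfig missing %q context setting\n", name)
	}
}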
	I1213 11:51:15.620021  597382 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:15.636913  597382 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 11:51:15.636992  597382 kubeadm.go:602] duration metric: took 31.493243ms to restartPrimaryControlPlane
	I1213 11:51:15.637018  597382 kubeadm.go:403] duration metric: took 145.406432ms to StartCluster
	I1213 11:51:15.637061  597382 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:15.637141  597382 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:15.638678  597382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:15.638988  597382 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:51:15.639341  597382 config.go:182] Loaded profile config "default-k8s-diff-port-151605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:15.639481  597382 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:15.639790  597382 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-151605"
	I1213 11:51:15.639818  597382 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-151605"
	W1213 11:51:15.639837  597382 addons.go:248] addon storage-provisioner should already be in state true
	I1213 11:51:15.639895  597382 host.go:66] Checking if "default-k8s-diff-port-151605" exists ...
	I1213 11:51:15.640665  597382 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:51:15.640878  597382 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-151605"
	I1213 11:51:15.640916  597382 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-151605"
	W1213 11:51:15.640942  597382 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:15.641043  597382 host.go:66] Checking if "default-k8s-diff-port-151605" exists ...
	I1213 11:51:15.641185  597382 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-151605"
	I1213 11:51:15.641202  597382 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-151605"
	I1213 11:51:15.641464  597382 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:51:15.641973  597382 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:51:15.643647  597382 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:15.651649  597382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:15.716379  597382 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:15.719418  597382 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:15.722494  597382 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
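
The addon phase above receives a toEnable map and only acts on the entries set to true (storage-provisioner, dashboard and default-storageclass here). A toy sketch of walking such a map in a stable order (illustrative only; the names below are just a subset of the map in the log):

package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{
		"dashboard":            true,
		"default-storageclass": true,
		"storage-provisioner":  true,
		"metrics-server":       false,
	}
	names := make([]string, 0, len(toEnable))
	for name := range toEnable {
		names = append(names, name)
	}
	sort.Strings(names) // map iteration order is random; sort for reproducible output
	for _, name := range names {
		if toEnable[name] {
			fmt.Printf("Setting addon %s=true in profile\n", name)
		}
	}
}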
	
	
	==> CRI-O <==
	Dec 13 11:51:04 embed-certs-326948 crio[841]: time="2025-12-13T11:51:04.143002423Z" level=info msg="Created container 1c6445cf6444f4d56a3a1002e41fa6fec15920a3cbcbe79e1026c7a7b36f7863: kube-system/coredns-66bc5c9577-459p2/coredns" id=289938b3-73bb-425e-8762-3042e68f75a7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:51:04 embed-certs-326948 crio[841]: time="2025-12-13T11:51:04.143751712Z" level=info msg="Starting container: 1c6445cf6444f4d56a3a1002e41fa6fec15920a3cbcbe79e1026c7a7b36f7863" id=d50af7f2-1b15-486b-be23-18b438122a9a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:51:04 embed-certs-326948 crio[841]: time="2025-12-13T11:51:04.153630408Z" level=info msg="Started container" PID=1780 containerID=1c6445cf6444f4d56a3a1002e41fa6fec15920a3cbcbe79e1026c7a7b36f7863 description=kube-system/coredns-66bc5c9577-459p2/coredns id=d50af7f2-1b15-486b-be23-18b438122a9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb8626751d17c970ff2e7084ec35ad6914ae8a58f0caadb1cdbc0b09b9f4c8fd
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.287157566Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9caab081-905c-4c71-9bee-7bc780d31406 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.287252229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.294975381Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35ed639d3bb29d0b2c13123143754786c991448d65161d7874c447a67d76ac2d UID:27e51c4b-ab88-4f0c-a4c9-d056eb521aca NetNS:/var/run/netns/7cf4f98f-162c-471d-9a51-8879df9a6a13 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001fbc7f8}] Aliases:map[]}"
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.295141134Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.310708061Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:35ed639d3bb29d0b2c13123143754786c991448d65161d7874c447a67d76ac2d UID:27e51c4b-ab88-4f0c-a4c9-d056eb521aca NetNS:/var/run/netns/7cf4f98f-162c-471d-9a51-8879df9a6a13 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001fbc7f8}] Aliases:map[]}"
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.311189919Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.317908838Z" level=info msg="Ran pod sandbox 35ed639d3bb29d0b2c13123143754786c991448d65161d7874c447a67d76ac2d with infra container: default/busybox/POD" id=9caab081-905c-4c71-9bee-7bc780d31406 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.319363035Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7fbf449c-2e6d-4f77-9e53-f3a4361a709c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.319896553Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7fbf449c-2e6d-4f77-9e53-f3a4361a709c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.32019043Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7fbf449c-2e6d-4f77-9e53-f3a4361a709c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.326752432Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be64e990-1766-4ffb-aa5c-e8266cfa76c6 name=/runtime.v1.ImageService/PullImage
	Dec 13 11:51:07 embed-certs-326948 crio[841]: time="2025-12-13T11:51:07.330720955Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.384066097Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=be64e990-1766-4ffb-aa5c-e8266cfa76c6 name=/runtime.v1.ImageService/PullImage
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.385037Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=085dae0d-4b43-451c-b40e-908164245577 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.386775147Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=616949df-d0f5-40f4-be15-ca7c0871b109 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.392899755Z" level=info msg="Creating container: default/busybox/busybox" id=0183c0f6-87f6-415e-adbc-18558ee6b72b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.393040466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.397921676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.398460388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.413224659Z" level=info msg="Created container 0923fb52058f22df7d38181c214bf8a7b4533c09b43773af60bb01b28486acae: default/busybox/busybox" id=0183c0f6-87f6-415e-adbc-18558ee6b72b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.414220416Z" level=info msg="Starting container: 0923fb52058f22df7d38181c214bf8a7b4533c09b43773af60bb01b28486acae" id=1fa75d93-9a72-4690-89cf-998f3e517823 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:51:09 embed-certs-326948 crio[841]: time="2025-12-13T11:51:09.415972897Z" level=info msg="Started container" PID=1844 containerID=0923fb52058f22df7d38181c214bf8a7b4533c09b43773af60bb01b28486acae description=default/busybox/busybox id=1fa75d93-9a72-4690-89cf-998f3e517823 name=/runtime.v1.RuntimeService/StartContainer sandboxID=35ed639d3bb29d0b2c13123143754786c991448d65161d7874c447a67d76ac2d
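
The busybox pull above follows the usual CRI sequence: ImageStatus reports the image missing, PullImage fetches it by tag, and the container is then created from the resolved digest. A rough equivalent using the crictl CLI (a sketch, not the CRI gRPC calls CRI-O actually serves):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	image := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	// "crictl inspecti" exits non-zero when the image is not present locally.
	if err := exec.Command("sudo", "crictl", "inspecti", image).Run(); err != nil {
		fmt.Println("image not found, pulling:", image)
		if err := exec.Command("sudo", "crictl", "pull", image).Run(); err != nil {
			fmt.Println("pull failed:", err)
			return
		}
	}
	fmt.Println("image available:", image)
}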
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	0923fb52058f2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   35ed639d3bb29       busybox                                      default
	1c6445cf6444f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   eb8626751d17c       coredns-66bc5c9577-459p2                     kube-system
	7c13e9280d0d9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   16aa5c0fdb4f3       storage-provisioner                          kube-system
	7f621d853c285       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    25 seconds ago      Running             kindnet-cni               0                   e26a219dee825       kindnet-q82mh                                kube-system
	d1bf887d6b97a       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                      27 seconds ago      Running             kube-proxy                0                   13dcd69f3ca4d       kube-proxy-5thrz                             kube-system
	b46f4a3625b7f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                      41 seconds ago      Running             kube-scheduler            0                   3ee4cf4c5a4c6       kube-scheduler-embed-certs-326948            kube-system
	43772ab2039e6       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                      41 seconds ago      Running             etcd                      0                   407596b7377c2       etcd-embed-certs-326948                      kube-system
	5d893a09638f3       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                      41 seconds ago      Running             kube-apiserver            0                   5200ff25f145c       kube-apiserver-embed-certs-326948            kube-system
	ab59116b2328f       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                      41 seconds ago      Running             kube-controller-manager   0                   b828488370d72       kube-controller-manager-embed-certs-326948   kube-system
	
	
	==> coredns [1c6445cf6444f4d56a3a1002e41fa6fec15920a3cbcbe79e1026c7a7b36f7863] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48081 - 20958 "HINFO IN 4190405817364991029.6476384025045852152. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009479464s
	
	
	==> describe nodes <==
	Name:               embed-certs-326948
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-326948
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=embed-certs-326948
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_50_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:50:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-326948
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:51:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:51:15 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:51:15 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:51:15 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:51:15 +0000   Sat, 13 Dec 2025 11:51:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-326948
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                649dcd43-7d72-42de-9a4b-6b3667428bbb
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-459p2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-embed-certs-326948                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-q82mh                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-326948             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-326948    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-5thrz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-326948             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node embed-certs-326948 event: Registered Node embed-certs-326948 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-326948 status is now: NodeReady
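
The "Allocated resources" figures above are simply the per-pod requests from the table summed against the node's 2 allocatable CPUs. A small Go snippet reproducing the 850m / 42% arithmetic:

package main

import "fmt"

func main() {
	// CPU requests in millicores, copied from the Non-terminated Pods table above:
	// coredns, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-scheduler
	// (busybox, kube-proxy and storage-provisioner request 0).
	requests := []int{100, 100, 100, 250, 200, 100}
	total := 0
	for _, m := range requests {
		total += m
	}
	allocatable := 2000 // 2 CPUs expressed in millicores
	fmt.Printf("cpu requests: %dm (%d%%)\n", total, total*100/allocatable) // cpu requests: 850m (42%)
}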
	
	
	==> dmesg <==
	[ +35.182226] overlayfs: idmapped layers are currently not supported
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43772ab2039e6b21cb88adabb58787fa684efd8ef46e5016488148c6f1dec774] <==
	{"level":"warn","ts":"2025-12-13T11:50:40.576641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.607784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.620964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.640641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.659746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.678624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.696547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.714132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.731093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.748170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.764986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.786232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.802521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.827376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.844387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.868786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.896741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.910800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.933280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.954248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:40.975795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:41.019956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:41.031876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:41.056961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:50:41.127304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37246","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:51:18 up  3:33,  0 user,  load average: 3.39, 2.74, 2.27
	Linux embed-certs-326948 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f621d853c28597e018bbd649729acc2eb3960fe7dcfe0fd9a7ed573b0cfcf1c] <==
	I1213 11:50:53.024615       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:50:53.024851       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 11:50:53.024971       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:50:53.024991       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:50:53.025007       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:50:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:50:53.320767       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:50:53.320865       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:50:53.320900       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:50:53.322110       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 11:50:53.521184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:50:53.521295       1 metrics.go:72] Registering metrics
	I1213 11:50:53.521404       1 controller.go:711] "Syncing nftables rules"
	I1213 11:51:03.327817       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 11:51:03.327857       1 main.go:301] handling current node
	I1213 11:51:13.321542       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 11:51:13.321593       1 main.go:301] handling current node
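
kindnet above reconciles only the local node and treats 10.244.0.0/16 as its noMask IPv4 subnet; the node description earlier shows the node's PodCIDR 10.244.0.0/24 carved out of it. A tiny net/netip sketch of that containment check:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	cluster := netip.MustParsePrefix("10.244.0.0/16") // kindnet's noMask IPv4 subnet
	node := netip.MustParsePrefix("10.244.0.0/24")    // PodCIDR from "describe nodes"
	inside := cluster.Contains(node.Addr()) && node.Bits() >= cluster.Bits()
	fmt.Println("node PodCIDR within cluster pod subnet:", inside)
}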
	
	
	==> kube-apiserver [5d893a09638f3eaabab49fd0ec2016c80edd4f018f4241ba0abe3526b2d24af7] <==
	I1213 11:50:42.070723       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:50:42.071904       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 11:50:42.093854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:50:42.167131       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:42.222438       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 11:50:42.301874       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:42.301997       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:50:42.730245       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 11:50:42.740831       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 11:50:42.740923       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:50:43.556571       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:50:43.615005       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:50:43.682498       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 11:50:43.691582       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1213 11:50:43.693113       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 11:50:43.698548       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:50:43.968367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:50:44.859357       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 11:50:44.908242       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 11:50:44.922093       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 11:50:49.776872       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:49.781915       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 11:50:49.922150       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:50:50.097977       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1213 11:51:16.206772       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:59286: use of closed network connection
	
	
	==> kube-controller-manager [ab59116b2328f24c91836af7590dc45257092ca8110cd6070c5b926cf55db554] <==
	I1213 11:50:48.995681       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:50:49.007088       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:50:49.014275       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 11:50:49.014388       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 11:50:49.014583       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 11:50:49.014676       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:50:49.014684       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:50:49.014690       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:50:49.017865       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 11:50:49.018328       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 11:50:49.018583       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 11:50:49.019090       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 11:50:49.019115       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 11:50:49.019195       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:50:49.019293       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 11:50:49.021486       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 11:50:49.021873       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 11:50:49.024026       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 11:50:49.024110       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 11:50:49.024809       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 11:50:49.027592       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-326948"
	I1213 11:50:49.027724       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:50:49.027797       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 11:50:49.029047       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 11:51:04.030276       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d1bf887d6b97a2b840f61c799d9d71c0e78dff4afd38491172ab85ead976c771] <==
	I1213 11:50:50.732574       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:50:50.833161       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:50:50.934374       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:50:50.934405       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1213 11:50:50.934467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:50:50.984962       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:50:50.985013       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:50:50.994598       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:50:50.996942       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:50:50.996968       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:50:51.000586       1 config.go:200] "Starting service config controller"
	I1213 11:50:51.000609       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:50:51.000630       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:50:51.000634       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:50:51.000645       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:50:51.000649       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:50:51.001680       1 config.go:309] "Starting node config controller"
	I1213 11:50:51.001700       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:50:51.001708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:50:51.103452       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:50:51.103461       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:50:51.103480       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b46f4a3625b7f07fa7bd0b5389ddef9ced53f9a0ab114146bff45cc31498f73d] <==
	E1213 11:50:42.218895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 11:50:42.225458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 11:50:42.225646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 11:50:42.228484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 11:50:42.228790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 11:50:42.232745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 11:50:42.234770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 11:50:42.234967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 11:50:42.235032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 11:50:42.235091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 11:50:42.235130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 11:50:42.235176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 11:50:42.235224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 11:50:42.235270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:50:42.240007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 11:50:42.247839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 11:50:42.248034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 11:50:43.066288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 11:50:43.100802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 11:50:43.131751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 11:50:43.168409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 11:50:43.195451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 11:50:43.200005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 11:50:43.566666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1213 11:50:46.118973       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
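
The "Failed to watch ... is forbidden" errors above are the scheduler's informers racing RBAC bootstrap during control-plane startup; they subside within a few seconds (the last one here is at 11:50:43). A hedged client-go sketch of probing the same kind of permission afterwards with a SelfSubjectAccessReview (assumes a reachable kubeconfig at the default location):

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			// Same verb/resource the scheduler was denied while RBAC was still syncing.
			ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed to list pods cluster-wide:", resp.Status.Allowed)
}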
	
	
	==> kubelet <==
	Dec 13 11:50:48 embed-certs-326948 kubelet[1325]: I1213 11:50:48.990999    1325 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 11:50:48 embed-certs-326948 kubelet[1325]: I1213 11:50:48.991862    1325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: E1213 11:50:50.275241    1325 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-q82mh\" is forbidden: User \"system:node:embed-certs-326948\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-326948' and this object" podUID="2861cef6-0bd3-400e-ad74-ce89a58a69eb" pod="kube-system/kindnet-q82mh"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.296000    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2861cef6-0bd3-400e-ad74-ce89a58a69eb-xtables-lock\") pod \"kindnet-q82mh\" (UID: \"2861cef6-0bd3-400e-ad74-ce89a58a69eb\") " pod="kube-system/kindnet-q82mh"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.296061    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2861cef6-0bd3-400e-ad74-ce89a58a69eb-cni-cfg\") pod \"kindnet-q82mh\" (UID: \"2861cef6-0bd3-400e-ad74-ce89a58a69eb\") " pod="kube-system/kindnet-q82mh"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.296173    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2861cef6-0bd3-400e-ad74-ce89a58a69eb-lib-modules\") pod \"kindnet-q82mh\" (UID: \"2861cef6-0bd3-400e-ad74-ce89a58a69eb\") " pod="kube-system/kindnet-q82mh"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.296214    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kns9\" (UniqueName: \"kubernetes.io/projected/2861cef6-0bd3-400e-ad74-ce89a58a69eb-kube-api-access-7kns9\") pod \"kindnet-q82mh\" (UID: \"2861cef6-0bd3-400e-ad74-ce89a58a69eb\") " pod="kube-system/kindnet-q82mh"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.401636    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6f2714d-7089-4d6e-94ae-c0ec1ed42a46-kube-proxy\") pod \"kube-proxy-5thrz\" (UID: \"b6f2714d-7089-4d6e-94ae-c0ec1ed42a46\") " pod="kube-system/kube-proxy-5thrz"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.401691    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6f2714d-7089-4d6e-94ae-c0ec1ed42a46-xtables-lock\") pod \"kube-proxy-5thrz\" (UID: \"b6f2714d-7089-4d6e-94ae-c0ec1ed42a46\") " pod="kube-system/kube-proxy-5thrz"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.401723    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6f2714d-7089-4d6e-94ae-c0ec1ed42a46-lib-modules\") pod \"kube-proxy-5thrz\" (UID: \"b6f2714d-7089-4d6e-94ae-c0ec1ed42a46\") " pod="kube-system/kube-proxy-5thrz"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.401745    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n59vd\" (UniqueName: \"kubernetes.io/projected/b6f2714d-7089-4d6e-94ae-c0ec1ed42a46-kube-api-access-n59vd\") pod \"kube-proxy-5thrz\" (UID: \"b6f2714d-7089-4d6e-94ae-c0ec1ed42a46\") " pod="kube-system/kube-proxy-5thrz"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: I1213 11:50:50.486001    1325 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: W1213 11:50:50.568745    1325 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/crio-e26a219dee82566109e495f85122235cfc060aae196f7e1d6054ac7e20e2c6a5 WatchSource:0}: Error finding container e26a219dee82566109e495f85122235cfc060aae196f7e1d6054ac7e20e2c6a5: Status 404 returned error can't find the container with id e26a219dee82566109e495f85122235cfc060aae196f7e1d6054ac7e20e2c6a5
	Dec 13 11:50:50 embed-certs-326948 kubelet[1325]: W1213 11:50:50.600123    1325 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/crio-13dcd69f3ca4d3423d31ef649c012bf04905448f8c65f79b74721794d483fb9a WatchSource:0}: Error finding container 13dcd69f3ca4d3423d31ef649c012bf04905448f8c65f79b74721794d483fb9a: Status 404 returned error can't find the container with id 13dcd69f3ca4d3423d31ef649c012bf04905448f8c65f79b74721794d483fb9a
	Dec 13 11:50:52 embed-certs-326948 kubelet[1325]: I1213 11:50:52.031830    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5thrz" podStartSLOduration=2.031809453 podStartE2EDuration="2.031809453s" podCreationTimestamp="2025-12-13 11:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:50:51.15465011 +0000 UTC m=+6.392669794" watchObservedRunningTime="2025-12-13 11:50:52.031809453 +0000 UTC m=+7.269829111"
	Dec 13 11:50:53 embed-certs-326948 kubelet[1325]: I1213 11:50:53.196111    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-q82mh" podStartSLOduration=0.890954038 podStartE2EDuration="3.196095007s" podCreationTimestamp="2025-12-13 11:50:50 +0000 UTC" firstStartedPulling="2025-12-13 11:50:50.576839681 +0000 UTC m=+5.814859348" lastFinishedPulling="2025-12-13 11:50:52.88198065 +0000 UTC m=+8.120000317" observedRunningTime="2025-12-13 11:50:53.172489817 +0000 UTC m=+8.410509517" watchObservedRunningTime="2025-12-13 11:50:53.196095007 +0000 UTC m=+8.434114666"
	Dec 13 11:51:03 embed-certs-326948 kubelet[1325]: I1213 11:51:03.723111    1325 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 11:51:03 embed-certs-326948 kubelet[1325]: I1213 11:51:03.927686    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7h7x\" (UniqueName: \"kubernetes.io/projected/4cddf997-bcc1-4a6d-accf-779a6c4d1557-kube-api-access-x7h7x\") pod \"storage-provisioner\" (UID: \"4cddf997-bcc1-4a6d-accf-779a6c4d1557\") " pod="kube-system/storage-provisioner"
	Dec 13 11:51:03 embed-certs-326948 kubelet[1325]: I1213 11:51:03.927743    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4cddf997-bcc1-4a6d-accf-779a6c4d1557-tmp\") pod \"storage-provisioner\" (UID: \"4cddf997-bcc1-4a6d-accf-779a6c4d1557\") " pod="kube-system/storage-provisioner"
	Dec 13 11:51:03 embed-certs-326948 kubelet[1325]: I1213 11:51:03.927771    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hpbd\" (UniqueName: \"kubernetes.io/projected/3b2fae9b-e0bd-4506-84c9-4385a6c2997c-kube-api-access-2hpbd\") pod \"coredns-66bc5c9577-459p2\" (UID: \"3b2fae9b-e0bd-4506-84c9-4385a6c2997c\") " pod="kube-system/coredns-66bc5c9577-459p2"
	Dec 13 11:51:03 embed-certs-326948 kubelet[1325]: I1213 11:51:03.927793    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b2fae9b-e0bd-4506-84c9-4385a6c2997c-config-volume\") pod \"coredns-66bc5c9577-459p2\" (UID: \"3b2fae9b-e0bd-4506-84c9-4385a6c2997c\") " pod="kube-system/coredns-66bc5c9577-459p2"
	Dec 13 11:51:04 embed-certs-326948 kubelet[1325]: I1213 11:51:04.254694    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.254675041 podStartE2EDuration="13.254675041s" podCreationTimestamp="2025-12-13 11:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:51:04.224544927 +0000 UTC m=+19.462564602" watchObservedRunningTime="2025-12-13 11:51:04.254675041 +0000 UTC m=+19.492694708"
	Dec 13 11:51:05 embed-certs-326948 kubelet[1325]: I1213 11:51:05.239634    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-459p2" podStartSLOduration=15.239600799 podStartE2EDuration="15.239600799s" podCreationTimestamp="2025-12-13 11:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 11:51:04.255556894 +0000 UTC m=+19.493576577" watchObservedRunningTime="2025-12-13 11:51:05.239600799 +0000 UTC m=+20.477620474"
	Dec 13 11:51:07 embed-certs-326948 kubelet[1325]: I1213 11:51:07.153316    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrt9z\" (UniqueName: \"kubernetes.io/projected/27e51c4b-ab88-4f0c-a4c9-d056eb521aca-kube-api-access-lrt9z\") pod \"busybox\" (UID: \"27e51c4b-ab88-4f0c-a4c9-d056eb521aca\") " pod="default/busybox"
	Dec 13 11:51:07 embed-certs-326948 kubelet[1325]: W1213 11:51:07.314782    1325 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/crio-35ed639d3bb29d0b2c13123143754786c991448d65161d7874c447a67d76ac2d WatchSource:0}: Error finding container 35ed639d3bb29d0b2c13123143754786c991448d65161d7874c447a67d76ac2d: Status 404 returned error can't find the container with id 35ed639d3bb29d0b2c13123143754786c991448d65161d7874c447a67d76ac2d
	
	
	==> storage-provisioner [7c13e9280d0d9fab0d700d74b50d86b9e313d856d61677d15bb624919db40dd6] <==
	I1213 11:51:04.196436       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:51:04.293657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:51:04.293779       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 11:51:04.297922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:04.327792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:51:04.347839       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:51:04.348049       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-326948_f3e4ca47-afc5-47c0-85b3-0d4c3ece9695!
	I1213 11:51:04.363490       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23390c62-23fe-4c67-a69c-5011159a5f31", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-326948_f3e4ca47-afc5-47c0-85b3-0d4c3ece9695 became leader
	W1213 11:51:04.364128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:04.375351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:51:04.448767       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-326948_f3e4ca47-afc5-47c0-85b3-0d4c3ece9695!
	W1213 11:51:06.378741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:06.383579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:08.386507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:08.397040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:10.400739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:10.405314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:12.408094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:12.415135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:14.418943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:14.424352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:16.427324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:16.433146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:18.452676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:18.470449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326948 -n embed-certs-326948
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-326948 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-151605 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-151605 --alsologtostderr -v=1: exit status 80 (1.975919858s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-151605 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:52:12.149588  602433 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:12.149775  602433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:12.149787  602433 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:12.149792  602433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:12.150075  602433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:12.150368  602433 out.go:368] Setting JSON to false
	I1213 11:52:12.150399  602433 mustload.go:66] Loading cluster: default-k8s-diff-port-151605
	I1213 11:52:12.150847  602433 config.go:182] Loaded profile config "default-k8s-diff-port-151605": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:52:12.151399  602433 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-151605 --format={{.State.Status}}
	I1213 11:52:12.169218  602433 host.go:66] Checking if "default-k8s-diff-port-151605" exists ...
	I1213 11:52:12.169619  602433 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:12.233817  602433 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:52:12.224463114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:12.234524  602433 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-151605 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 11:52:12.238025  602433 out.go:179] * Pausing node default-k8s-diff-port-151605 ... 
	I1213 11:52:12.241078  602433 host.go:66] Checking if "default-k8s-diff-port-151605" exists ...
	I1213 11:52:12.241473  602433 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:12.241537  602433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-151605
	I1213 11:52:12.258918  602433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/default-k8s-diff-port-151605/id_rsa Username:docker}
	I1213 11:52:12.370731  602433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:12.391013  602433 pause.go:52] kubelet running: true
	I1213 11:52:12.391094  602433 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:52:12.666748  602433 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:52:12.666852  602433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:52:12.738126  602433 cri.go:89] found id: "8674225d875f75c6ce5f382b8e3fdc88bd212b7abc69bd7a39f03cdf100ec6fc"
	I1213 11:52:12.738147  602433 cri.go:89] found id: "36769a73ca236b1b9aa92ad718f5b335f85d3c1cb3912e1f3fbf541b2764e758"
	I1213 11:52:12.738152  602433 cri.go:89] found id: "73e76e8c891b9ea190a1e89188926f8eb848c83067c164ab1b20f2f773b8aaff"
	I1213 11:52:12.738156  602433 cri.go:89] found id: "fba9365f141d5c048e73de3c4b23b2c1a27c25daee983fc11dd819f2303586c1"
	I1213 11:52:12.738160  602433 cri.go:89] found id: "0f741173607eb6e99619529190004990e5a1a175b044f55053251c961fb0bcdc"
	I1213 11:52:12.738163  602433 cri.go:89] found id: "cbd9d49b05b8a5dd0dc77bf63238bdf30ee239621287d026e486c91a38c69194"
	I1213 11:52:12.738166  602433 cri.go:89] found id: "41f26b68d203d9d83d81376bab5feea3fb613ac275331c49aa37fbebfa938c29"
	I1213 11:52:12.738169  602433 cri.go:89] found id: "c6a26bd3f3f3a9aadd06af1e7019a9a4ad95fe27fc8cd6cd2866891c0293ac91"
	I1213 11:52:12.738172  602433 cri.go:89] found id: "54cffecfcbe7d79dd9b85c2aea28df92440fb375b7e38669ef73479908f14bd0"
	I1213 11:52:12.738179  602433 cri.go:89] found id: "bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06"
	I1213 11:52:12.738183  602433 cri.go:89] found id: "135099d7b9d603df26fd1321cf6285c1801fdd808e45650834598b050c12ba25"
	I1213 11:52:12.738185  602433 cri.go:89] found id: ""
	I1213 11:52:12.738244  602433 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:52:12.749110  602433 retry.go:31] will retry after 183.48918ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:12Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:52:12.933613  602433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:12.948602  602433 pause.go:52] kubelet running: false
	I1213 11:52:12.948720  602433 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:52:13.130828  602433 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:52:13.130913  602433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:52:13.212430  602433 cri.go:89] found id: "8674225d875f75c6ce5f382b8e3fdc88bd212b7abc69bd7a39f03cdf100ec6fc"
	I1213 11:52:13.212466  602433 cri.go:89] found id: "36769a73ca236b1b9aa92ad718f5b335f85d3c1cb3912e1f3fbf541b2764e758"
	I1213 11:52:13.212471  602433 cri.go:89] found id: "73e76e8c891b9ea190a1e89188926f8eb848c83067c164ab1b20f2f773b8aaff"
	I1213 11:52:13.212476  602433 cri.go:89] found id: "fba9365f141d5c048e73de3c4b23b2c1a27c25daee983fc11dd819f2303586c1"
	I1213 11:52:13.212479  602433 cri.go:89] found id: "0f741173607eb6e99619529190004990e5a1a175b044f55053251c961fb0bcdc"
	I1213 11:52:13.212483  602433 cri.go:89] found id: "cbd9d49b05b8a5dd0dc77bf63238bdf30ee239621287d026e486c91a38c69194"
	I1213 11:52:13.212486  602433 cri.go:89] found id: "41f26b68d203d9d83d81376bab5feea3fb613ac275331c49aa37fbebfa938c29"
	I1213 11:52:13.212489  602433 cri.go:89] found id: "c6a26bd3f3f3a9aadd06af1e7019a9a4ad95fe27fc8cd6cd2866891c0293ac91"
	I1213 11:52:13.212493  602433 cri.go:89] found id: "54cffecfcbe7d79dd9b85c2aea28df92440fb375b7e38669ef73479908f14bd0"
	I1213 11:52:13.212523  602433 cri.go:89] found id: "bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06"
	I1213 11:52:13.212533  602433 cri.go:89] found id: "135099d7b9d603df26fd1321cf6285c1801fdd808e45650834598b050c12ba25"
	I1213 11:52:13.212537  602433 cri.go:89] found id: ""
	I1213 11:52:13.212585  602433 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:52:13.225168  602433 retry.go:31] will retry after 546.120401ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:13Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:52:13.771604  602433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:13.785683  602433 pause.go:52] kubelet running: false
	I1213 11:52:13.785760  602433 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:52:13.960216  602433 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:52:13.960304  602433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:52:14.039634  602433 cri.go:89] found id: "8674225d875f75c6ce5f382b8e3fdc88bd212b7abc69bd7a39f03cdf100ec6fc"
	I1213 11:52:14.039658  602433 cri.go:89] found id: "36769a73ca236b1b9aa92ad718f5b335f85d3c1cb3912e1f3fbf541b2764e758"
	I1213 11:52:14.039663  602433 cri.go:89] found id: "73e76e8c891b9ea190a1e89188926f8eb848c83067c164ab1b20f2f773b8aaff"
	I1213 11:52:14.039667  602433 cri.go:89] found id: "fba9365f141d5c048e73de3c4b23b2c1a27c25daee983fc11dd819f2303586c1"
	I1213 11:52:14.039670  602433 cri.go:89] found id: "0f741173607eb6e99619529190004990e5a1a175b044f55053251c961fb0bcdc"
	I1213 11:52:14.039674  602433 cri.go:89] found id: "cbd9d49b05b8a5dd0dc77bf63238bdf30ee239621287d026e486c91a38c69194"
	I1213 11:52:14.039677  602433 cri.go:89] found id: "41f26b68d203d9d83d81376bab5feea3fb613ac275331c49aa37fbebfa938c29"
	I1213 11:52:14.039681  602433 cri.go:89] found id: "c6a26bd3f3f3a9aadd06af1e7019a9a4ad95fe27fc8cd6cd2866891c0293ac91"
	I1213 11:52:14.039685  602433 cri.go:89] found id: "54cffecfcbe7d79dd9b85c2aea28df92440fb375b7e38669ef73479908f14bd0"
	I1213 11:52:14.039692  602433 cri.go:89] found id: "bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06"
	I1213 11:52:14.039695  602433 cri.go:89] found id: "135099d7b9d603df26fd1321cf6285c1801fdd808e45650834598b050c12ba25"
	I1213 11:52:14.039699  602433 cri.go:89] found id: ""
	I1213 11:52:14.039749  602433 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:52:14.055545  602433 out.go:203] 
	W1213 11:52:14.058538  602433 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 11:52:14.058566  602433 out.go:285] * 
	* 
	W1213 11:52:14.065019  602433 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:52:14.067936  602433 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-151605 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-151605
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-151605:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d",
	        "Created": "2025-12-13T11:49:50.135294946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 597510,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:51:08.101502611Z",
	            "FinishedAt": "2025-12-13T11:51:07.236618995Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/hosts",
	        "LogPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d-json.log",
	        "Name": "/default-k8s-diff-port-151605",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-151605:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-151605",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d",
	                "LowerDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-151605",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-151605/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-151605",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-151605",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-151605",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ceb56575a00733dc15b62cf1f232e3cf32d78f9b6471db710187037ab35ab0e",
	            "SandboxKey": "/var/run/docker/netns/2ceb56575a00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-151605": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:37:aa:59:76:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e01ba379a4d94c8de18912da562a485bb057ae2af70e58b76f1547550548184",
	                    "EndpointID": "144b78028dfa5749831619eb1cb4c4cbad0ba48c6675c0a36d0dd225e8e6321b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-151605",
	                        "ed91f41ddcee"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605: exit status 2 (362.543019ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-151605 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-151605 logs -n 25: (1.439280797s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-522461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ delete  │ -p cert-options-522461                                                                                                                                                                                                                        │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │                     │
	│ stop    │ -p old-k8s-version-051699 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-051699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:51:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:51:32.984818  600084 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:32.985030  600084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:32.985057  600084 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:32.985074  600084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:32.985355  600084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:51:32.985789  600084 out.go:368] Setting JSON to false
	I1213 11:51:32.987378  600084 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12845,"bootTime":1765613848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:51:32.987477  600084 start.go:143] virtualization:  
	I1213 11:51:32.992143  600084 out.go:179] * [embed-certs-326948] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:32.995397  600084 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:32.995477  600084 notify.go:221] Checking for updates...
	I1213 11:51:32.999132  600084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:33.002659  600084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:33.005858  600084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:51:33.008906  600084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:33.011815  600084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:33.015232  600084 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:33.015855  600084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:33.049060  600084 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:33.049189  600084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:33.164743  600084 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 11:51:33.152700014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:33.164855  600084 docker.go:319] overlay module found
	I1213 11:51:33.168098  600084 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:33.171115  600084 start.go:309] selected driver: docker
	I1213 11:51:33.171149  600084 start.go:927] validating driver "docker" against &{Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:33.171251  600084 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:33.172155  600084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:33.267963  600084 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 11:51:33.255540412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:33.268293  600084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:51:33.268316  600084 cni.go:84] Creating CNI manager for ""
	I1213 11:51:33.268363  600084 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:51:33.268399  600084 start.go:353] cluster config:
	{Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:33.271742  600084 out.go:179] * Starting "embed-certs-326948" primary control-plane node in "embed-certs-326948" cluster
	I1213 11:51:33.274786  600084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:51:33.277765  600084 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:33.280645  600084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:51:33.280712  600084 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:51:33.280722  600084 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:33.280822  600084 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:33.280831  600084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:51:33.280950  600084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/config.json ...
	I1213 11:51:33.281156  600084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:33.316515  600084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:33.316534  600084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:33.316548  600084 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:33.316578  600084 start.go:360] acquireMachinesLock for embed-certs-326948: {Name:mk006cdb726d13b418884982bd33ef960e248469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:33.316632  600084 start.go:364] duration metric: took 33.814µs to acquireMachinesLock for "embed-certs-326948"
	I1213 11:51:33.316650  600084 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:33.316655  600084 fix.go:54] fixHost starting: 
	I1213 11:51:33.316919  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:33.334973  600084 fix.go:112] recreateIfNeeded on embed-certs-326948: state=Stopped err=<nil>
	W1213 11:51:33.335005  600084 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 11:51:34.989511  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:36.990627  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	I1213 11:51:33.338472  600084 out.go:252] * Restarting existing docker container for "embed-certs-326948" ...
	I1213 11:51:33.338560  600084 cli_runner.go:164] Run: docker start embed-certs-326948
	I1213 11:51:33.674634  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:33.699762  600084 kic.go:430] container "embed-certs-326948" state is running.
	I1213 11:51:33.700344  600084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:51:33.732990  600084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/config.json ...
	I1213 11:51:33.733226  600084 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:33.733307  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:33.764596  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:33.764923  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:33.764942  600084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:33.768998  600084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:51:36.919267  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-326948
	
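The first SSH dial above fails with "handshake failed: EOF" because the freshly restarted container is not yet accepting connections; libmachine simply keeps retrying until the `hostname` command succeeds about three seconds later. A toy retry loop against the forwarded SSH port is sketched below; the port 33453 is taken from the log, but the retry count and delay are assumptions, not minikube's actual policy.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:33453" // forwarded SSH port from the log above
	for attempt := 1; attempt <= 10; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port reachable on attempt", attempt)
			return
		}
		// Early attempts can fail while sshd inside the container starts up.
		fmt.Println("dial failed, retrying:", err)
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for", addr)
}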
	I1213 11:51:36.919296  600084 ubuntu.go:182] provisioning hostname "embed-certs-326948"
	I1213 11:51:36.919366  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:36.937883  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:36.938211  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:36.938229  600084 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-326948 && echo "embed-certs-326948" | sudo tee /etc/hostname
	I1213 11:51:37.106551  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-326948
	
	I1213 11:51:37.106625  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:37.124749  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:37.125076  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:37.125096  600084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-326948' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-326948/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-326948' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:51:37.275900  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:51:37.275930  600084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:51:37.275965  600084 ubuntu.go:190] setting up certificates
	I1213 11:51:37.275981  600084 provision.go:84] configureAuth start
	I1213 11:51:37.276044  600084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:51:37.293863  600084 provision.go:143] copyHostCerts
	I1213 11:51:37.293949  600084 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:51:37.293959  600084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:51:37.294040  600084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:51:37.294179  600084 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:51:37.294192  600084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:51:37.294224  600084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:51:37.294287  600084 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:51:37.294293  600084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:51:37.294321  600084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:51:37.294384  600084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.embed-certs-326948 san=[127.0.0.1 192.168.76.2 embed-certs-326948 localhost minikube]
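The provision step above generates a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.76.2, the hostname embed-certs-326948, localhost and minikube. The sketch below shows how a SAN-bearing certificate like that can be produced with Go's standard crypto/x509 package; it self-signs to stay short, whereas minikube signs against its own CA, so treat it as an illustration rather than minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: self-signed here; minikube signs with ca.pem/ca-key.pem.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-326948"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the ones logged above.
		DNSNames:    []string{"embed-certs-326948", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}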
	I1213 11:51:37.679084  600084 provision.go:177] copyRemoteCerts
	I1213 11:51:37.679161  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:37.679200  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:37.697046  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:37.807285  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:51:37.825098  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:37.842815  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:37.861334  600084 provision.go:87] duration metric: took 585.3379ms to configureAuth
	I1213 11:51:37.861362  600084 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:37.861568  600084 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:37.861675  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:37.878716  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:37.879040  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:37.879059  600084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:51:38.304104  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:51:38.304127  600084 machine.go:97] duration metric: took 4.570880668s to provisionDockerMachine
	I1213 11:51:38.304139  600084 start.go:293] postStartSetup for "embed-certs-326948" (driver="docker")
	I1213 11:51:38.304150  600084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:38.304209  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:38.304248  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.326704  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.431776  600084 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:38.435463  600084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:38.435494  600084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:38.435539  600084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:51:38.435620  600084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:51:38.435732  600084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:51:38.435840  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:38.443915  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:51:38.462413  600084 start.go:296] duration metric: took 158.257898ms for postStartSetup
	I1213 11:51:38.462524  600084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:38.462577  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.479236  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.580880  600084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:38.585843  600084 fix.go:56] duration metric: took 5.269178309s for fixHost
	I1213 11:51:38.585868  600084 start.go:83] releasing machines lock for "embed-certs-326948", held for 5.269228287s
	I1213 11:51:38.585943  600084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:51:38.602941  600084 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:38.603003  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.603335  600084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:38.603387  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.621757  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.623198  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.820455  600084 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:38.827256  600084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:51:38.865114  600084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:38.869764  600084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:38.869893  600084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:38.877975  600084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
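The find/mv pair above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix; in this run nothing matched, so there was nothing to disable. A hedged Go sketch of the same rename pass follows; the directory and suffix come from the log, while the matching logic is simplified compared to the logged find expression.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Disable bridge/podman configs so kindnet can manage the pod network.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err == nil {
				fmt.Println("disabled", src)
			}
		}
	}
}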
	I1213 11:51:38.878002  600084 start.go:496] detecting cgroup driver to use...
	I1213 11:51:38.878050  600084 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:38.878120  600084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:51:38.893847  600084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:51:38.907331  600084 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:38.907409  600084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:38.924809  600084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:38.939709  600084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:39.066898  600084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:39.196417  600084 docker.go:234] disabling docker service ...
	I1213 11:51:39.196490  600084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:39.211616  600084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:39.224798  600084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:39.350599  600084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:39.473229  600084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:39.492626  600084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:39.507486  600084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:51:39.507624  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.517765  600084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:51:39.517846  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.527583  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.536761  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.546876  600084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:39.556061  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.565849  600084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.574794  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
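Each of the sed invocations above rewrites one key in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before crio is restarted. A rough Go equivalent of that kind of in-place key rewrite is sketched below; the file path and values come from the log, but the helper itself is an assumption, not minikube's crio.go code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey replaces every line assigning `key` with `key = "value"`,
// the same effect as the logged `sed -i 's|^.*key = .*$|...|'` commands.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setCrioKey(conf, "cgroup_manager", "cgroupfs")
}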
	I1213 11:51:39.584606  600084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:39.592334  600084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:39.599849  600084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:39.725768  600084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:51:39.955845  600084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:51:39.955919  600084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:51:39.959961  600084 start.go:564] Will wait 60s for crictl version
	I1213 11:51:39.960078  600084 ssh_runner.go:195] Run: which crictl
	I1213 11:51:39.964208  600084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:39.992782  600084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
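After restarting crio, start.go waits up to 60s for the CRI socket to exist and then for crictl to answer a version query. A minimal polling sketch under the same 60-second budget is shown below; the socket path comes from the log, while the wait loop itself is an assumption rather than minikube's code.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}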
	I1213 11:51:39.992945  600084 ssh_runner.go:195] Run: crio --version
	I1213 11:51:40.033907  600084 ssh_runner.go:195] Run: crio --version
	I1213 11:51:40.067823  600084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:51:40.070931  600084 cli_runner.go:164] Run: docker network inspect embed-certs-326948 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:51:40.089047  600084 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:40.093885  600084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
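The bash one-liner above makes the host.minikube.internal mapping idempotent: it filters out any existing line for that name and appends a fresh "192.168.76.1	host.minikube.internal" entry. A rough Go equivalent of that upsert is sketched here; the path, IP and hostname are from the log, and the helper name is made up for the example.

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry drops any line that already maps host and appends a fresh
// "ip\thost" mapping, mirroring the grep -v / echo pipeline in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = upsertHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal")
}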
	I1213 11:51:40.104954  600084 kubeadm.go:884] updating cluster {Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:40.105077  600084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:51:40.105149  600084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:40.144948  600084 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:51:40.144974  600084 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:51:40.145031  600084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:40.174783  600084 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:51:40.174861  600084 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:40.174883  600084 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1213 11:51:40.175010  600084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-326948 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:51:40.175134  600084 ssh_runner.go:195] Run: crio config
	I1213 11:51:40.240434  600084 cni.go:84] Creating CNI manager for ""
	I1213 11:51:40.240460  600084 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:51:40.240504  600084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:51:40.240534  600084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-326948 NodeName:embed-certs-326948 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:40.240692  600084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-326948"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:51:40.240768  600084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:51:40.249060  600084 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:40.249132  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:40.256947  600084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 11:51:40.272622  600084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:51:40.290341  600084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
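The kubeadm config printed earlier and written here to /var/tmp/minikube/kubeadm.yaml.new is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). The small standard-library sketch below splits such a stream on document separators and reports each kind; the file path is the one just scp'd, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Documents in the generated file are separated by lines containing only "---".
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}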
	I1213 11:51:40.303810  600084 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:40.307420  600084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:40.318013  600084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:40.435145  600084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:40.453384  600084 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948 for IP: 192.168.76.2
	I1213 11:51:40.453407  600084 certs.go:195] generating shared ca certs ...
	I1213 11:51:40.453423  600084 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:40.453556  600084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:51:40.453613  600084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:51:40.453625  600084 certs.go:257] generating profile certs ...
	I1213 11:51:40.453718  600084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.key
	I1213 11:51:40.453788  600084 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key.dff061d2
	I1213 11:51:40.453841  600084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key
	I1213 11:51:40.453974  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:51:40.454014  600084 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:40.454026  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:40.454057  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:40.454102  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:40.454130  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:51:40.454192  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:51:40.454787  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:40.479260  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:40.519669  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:40.544142  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:40.567634  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 11:51:40.588503  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:51:40.608743  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:40.648222  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:51:40.672215  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:40.697478  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:51:40.727586  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:51:40.748998  600084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:40.762826  600084 ssh_runner.go:195] Run: openssl version
	I1213 11:51:40.769231  600084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.776982  600084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:51:40.788856  600084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.793192  600084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.793280  600084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.834721  600084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:40.842495  600084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.849954  600084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:40.857688  600084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.861534  600084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.861604  600084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.904007  600084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:40.911605  600084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.919642  600084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:51:40.927261  600084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.931257  600084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.931324  600084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.972944  600084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:51:40.981040  600084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:40.986918  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:41.029892  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:41.071287  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:41.117961  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:41.172175  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:41.214776  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
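The repeated `openssl x509 -noout -in ... -checkend 86400` calls above ask whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit code would trigger regeneration. A rough Go equivalent of one such check, using only the standard library, is sketched below; the certificate path is one of those logged, the helper is an assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend` answers via its exit status.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}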
	I1213 11:51:41.271378  600084 kubeadm.go:401] StartCluster: {Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:41.271466  600084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:41.271610  600084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:41.347399  600084 cri.go:89] found id: "2f0d882fac60f1616055bed06c1f6058d2f4d9771c371fa9e130d01762278744"
	I1213 11:51:41.347422  600084 cri.go:89] found id: "5fa45fd0696ef89615d1d81b1bf2769d38c87713975e43422c105cb0d61cfdaa"
	I1213 11:51:41.347432  600084 cri.go:89] found id: "cb833c8e8af6645f23e9e2891cd88798a8d4211065330a18962b7d19db79c7ba"
	I1213 11:51:41.347436  600084 cri.go:89] found id: ""
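cri.go collects the kube-system container IDs by running crictl with a label filter; the trailing empty `found id: ""` line simply marks the end of the list. A bare-bones sketch of invoking that same command from Go is shown below; the command and label are copied from the log, while the output handling is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// --quiet prints one container ID per line.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}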
	I1213 11:51:41.347488  600084 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:51:41.370430  600084 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:51:41Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:51:41.370523  600084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:41.387432  600084 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:41.387453  600084 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:41.387504  600084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:41.404349  600084 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:41.404951  600084 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-326948" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:41.405227  600084 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-326948" cluster setting kubeconfig missing "embed-certs-326948" context setting]
	I1213 11:51:41.405707  600084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:41.407160  600084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:41.422863  600084 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 11:51:41.422893  600084 kubeadm.go:602] duration metric: took 35.433698ms to restartPrimaryControlPlane
	I1213 11:51:41.422904  600084 kubeadm.go:403] duration metric: took 151.538382ms to StartCluster
	I1213 11:51:41.422919  600084 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:41.422991  600084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:41.424281  600084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:41.424506  600084 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:51:41.424796  600084 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:41.424843  600084 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:41.424909  600084 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-326948"
	I1213 11:51:41.424922  600084 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-326948"
	W1213 11:51:41.424933  600084 addons.go:248] addon storage-provisioner should already be in state true
	I1213 11:51:41.424954  600084 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:51:41.425388  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.425662  600084 addons.go:70] Setting dashboard=true in profile "embed-certs-326948"
	I1213 11:51:41.425688  600084 addons.go:239] Setting addon dashboard=true in "embed-certs-326948"
	W1213 11:51:41.425695  600084 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:41.425719  600084 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:51:41.426172  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.426573  600084 addons.go:70] Setting default-storageclass=true in profile "embed-certs-326948"
	I1213 11:51:41.426594  600084 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-326948"
	I1213 11:51:41.426864  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.429153  600084 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:41.432599  600084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:41.473852  600084 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:41.481581  600084 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:41.481610  600084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:51:41.481677  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:41.491460  600084 addons.go:239] Setting addon default-storageclass=true in "embed-certs-326948"
	W1213 11:51:41.491487  600084 addons.go:248] addon default-storageclass should already be in state true
	I1213 11:51:41.491659  600084 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:51:41.492109  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.499858  600084 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:41.507260  600084 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1213 11:51:39.490140  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:41.493670  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	I1213 11:51:41.512405  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:51:41.512434  600084 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:51:41.512504  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:41.530771  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:41.543703  600084 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:41.543723  600084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:51:41.543794  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:41.565479  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:41.597329  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:41.747728  600084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:41.787611  600084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:41.804414  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:51:41.804439  600084 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:51:41.862761  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:51:41.862793  600084 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:51:41.892332  600084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:41.942039  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:51:41.942073  600084 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:51:42.049866  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:51:42.049890  600084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:51:42.105809  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:51:42.105838  600084 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:51:42.137871  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:51:42.137951  600084 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:51:42.161626  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:51:42.161724  600084 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:51:42.189524  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:51:42.189609  600084 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:51:42.228273  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:42.228301  600084 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:51:42.252680  600084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:43.990581  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:45.990825  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	I1213 11:51:47.845489  600084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.097726855s)
	I1213 11:51:47.845551  600084 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.057871666s)
	I1213 11:51:47.845574  600084 node_ready.go:35] waiting up to 6m0s for node "embed-certs-326948" to be "Ready" ...
	I1213 11:51:47.845894  600084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.953538331s)
	I1213 11:51:47.846179  600084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.593462445s)
	I1213 11:51:47.849424  600084 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-326948 addons enable metrics-server
	
	I1213 11:51:47.877363  600084 node_ready.go:49] node "embed-certs-326948" is "Ready"
	I1213 11:51:47.877395  600084 node_ready.go:38] duration metric: took 31.802943ms for node "embed-certs-326948" to be "Ready" ...
	I1213 11:51:47.877410  600084 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:51:47.877470  600084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:47.886608  600084 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 11:51:47.889531  600084 addons.go:530] duration metric: took 6.464674737s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 11:51:47.906029  600084 api_server.go:72] duration metric: took 6.481484859s to wait for apiserver process to appear ...
	I1213 11:51:47.906123  600084 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:51:47.906159  600084 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 11:51:47.914925  600084 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 11:51:47.916581  600084 api_server.go:141] control plane version: v1.34.2
	I1213 11:51:47.916644  600084 api_server.go:131] duration metric: took 10.500339ms to wait for apiserver health ...
	I1213 11:51:47.916678  600084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:51:47.919944  600084 system_pods.go:59] 8 kube-system pods found
	I1213 11:51:47.920026  600084 system_pods.go:61] "coredns-66bc5c9577-459p2" [3b2fae9b-e0bd-4506-84c9-4385a6c2997c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:51:47.920053  600084 system_pods.go:61] "etcd-embed-certs-326948" [520e544b-4ca6-411f-927a-867164c6ae12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:51:47.920091  600084 system_pods.go:61] "kindnet-q82mh" [2861cef6-0bd3-400e-ad74-ce89a58a69eb] Running
	I1213 11:51:47.920118  600084 system_pods.go:61] "kube-apiserver-embed-certs-326948" [e88d539d-e0f2-4396-a899-615d61945720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:51:47.920142  600084 system_pods.go:61] "kube-controller-manager-embed-certs-326948" [61318a61-ad9f-4f1f-b1e7-1238077d0d53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:51:47.920182  600084 system_pods.go:61] "kube-proxy-5thrz" [b6f2714d-7089-4d6e-94ae-c0ec1ed42a46] Running
	I1213 11:51:47.920209  600084 system_pods.go:61] "kube-scheduler-embed-certs-326948" [8412daf2-f4c8-4870-a6e1-3a852d9c4929] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:51:47.920228  600084 system_pods.go:61] "storage-provisioner" [4cddf997-bcc1-4a6d-accf-779a6c4d1557] Running
	I1213 11:51:47.920265  600084 system_pods.go:74] duration metric: took 3.566329ms to wait for pod list to return data ...
	I1213 11:51:47.920291  600084 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:51:47.923077  600084 default_sa.go:45] found service account: "default"
	I1213 11:51:47.923140  600084 default_sa.go:55] duration metric: took 2.821002ms for default service account to be created ...
	I1213 11:51:47.923164  600084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:51:47.926500  600084 system_pods.go:86] 8 kube-system pods found
	I1213 11:51:47.926578  600084 system_pods.go:89] "coredns-66bc5c9577-459p2" [3b2fae9b-e0bd-4506-84c9-4385a6c2997c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:51:47.926600  600084 system_pods.go:89] "etcd-embed-certs-326948" [520e544b-4ca6-411f-927a-867164c6ae12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:51:47.926638  600084 system_pods.go:89] "kindnet-q82mh" [2861cef6-0bd3-400e-ad74-ce89a58a69eb] Running
	I1213 11:51:47.926663  600084 system_pods.go:89] "kube-apiserver-embed-certs-326948" [e88d539d-e0f2-4396-a899-615d61945720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:51:47.926685  600084 system_pods.go:89] "kube-controller-manager-embed-certs-326948" [61318a61-ad9f-4f1f-b1e7-1238077d0d53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:51:47.926722  600084 system_pods.go:89] "kube-proxy-5thrz" [b6f2714d-7089-4d6e-94ae-c0ec1ed42a46] Running
	I1213 11:51:47.926747  600084 system_pods.go:89] "kube-scheduler-embed-certs-326948" [8412daf2-f4c8-4870-a6e1-3a852d9c4929] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:51:47.926768  600084 system_pods.go:89] "storage-provisioner" [4cddf997-bcc1-4a6d-accf-779a6c4d1557] Running
	I1213 11:51:47.926804  600084 system_pods.go:126] duration metric: took 3.62274ms to wait for k8s-apps to be running ...
	I1213 11:51:47.926831  600084 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:51:47.926915  600084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:51:47.948025  600084 system_svc.go:56] duration metric: took 21.184819ms WaitForService to wait for kubelet
	I1213 11:51:47.948056  600084 kubeadm.go:587] duration metric: took 6.523516492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:51:47.948075  600084 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:51:47.952400  600084 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:51:47.952434  600084 node_conditions.go:123] node cpu capacity is 2
	I1213 11:51:47.952446  600084 node_conditions.go:105] duration metric: took 4.33829ms to run NodePressure ...
	I1213 11:51:47.952467  600084 start.go:242] waiting for startup goroutines ...
	I1213 11:51:47.952476  600084 start.go:247] waiting for cluster config update ...
	I1213 11:51:47.952487  600084 start.go:256] writing updated cluster config ...
	I1213 11:51:47.952795  600084 ssh_runner.go:195] Run: rm -f paused
	I1213 11:51:47.957541  600084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:51:47.962334  600084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-459p2" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 11:51:48.490457  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:50.990468  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:49.994052  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:51:52.468704  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:51:53.489991  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:55.490272  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:57.490898  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:54.469787  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:51:56.968982  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	I1213 11:51:58.490193  597382 pod_ready.go:94] pod "coredns-66bc5c9577-pr2h6" is "Ready"
	I1213 11:51:58.490223  597382 pod_ready.go:86] duration metric: took 35.006040895s for pod "coredns-66bc5c9577-pr2h6" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.493216  597382 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.498150  597382 pod_ready.go:94] pod "etcd-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:58.498179  597382 pod_ready.go:86] duration metric: took 4.934085ms for pod "etcd-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.500696  597382 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.506302  597382 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:58.506330  597382 pod_ready.go:86] duration metric: took 5.605984ms for pod "kube-apiserver-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.508966  597382 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.688031  597382 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:58.688059  597382 pod_ready.go:86] duration metric: took 179.066421ms for pod "kube-controller-manager-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.888161  597382 pod_ready.go:83] waiting for pod "kube-proxy-7sl78" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.289184  597382 pod_ready.go:94] pod "kube-proxy-7sl78" is "Ready"
	I1213 11:51:59.289209  597382 pod_ready.go:86] duration metric: took 401.020443ms for pod "kube-proxy-7sl78" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.488774  597382 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.888078  597382 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:59.888110  597382 pod_ready.go:86] duration metric: took 399.308972ms for pod "kube-scheduler-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.888123  597382 pod_ready.go:40] duration metric: took 36.478953164s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:51:59.943127  597382 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 11:51:59.946349  597382 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-151605" cluster and "default" namespace by default
	W1213 11:51:58.971675  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:01.470668  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:03.968114  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:05.968529  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:08.467468  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:10.468103  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:12.468238  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
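	The repeated pod_ready warnings above are minikube polling the coredns pod until its Ready condition turns True (with the 4m0s "extra waiting" budget noted earlier in the log). As a minimal illustrative sketch only, and not minikube's actual pod_ready.go, here is the same kind of check done with client-go; the kubeconfig path and the pod name coredns-66bc5c9577-459p2 are assumptions taken from this run:

	```go
	// Illustrative sketch: poll a pod until its Ready condition is True,
	// the kind of wait the pod_ready.go log lines above describe.
	// Assumes a kubeconfig at the default location; pod name is from this run.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirror the 4m0s extra-wait budget seen in the log above.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-66bc5c9577-459p2", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	```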
	
	
	==> CRI-O <==
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.944057988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.955187149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.958968856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.988010237Z" level=info msg="Created container bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc/dashboard-metrics-scraper" id=b5ecc76f-fcf5-4540-9d63-5cc2cb959f85 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.990598983Z" level=info msg="Starting container: bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06" id=43f8e72a-6c49-4e0e-aabe-2726725aee8f name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.994152315Z" level=info msg="Started container" PID=1669 containerID=bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc/dashboard-metrics-scraper id=43f8e72a-6c49-4e0e-aabe-2726725aee8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8
	Dec 13 11:51:58 default-k8s-diff-port-151605 conmon[1667]: conmon bfa9d999fddf9ef31c4e <ninfo>: container 1669 exited with status 1
	Dec 13 11:51:59 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:59.157922348Z" level=info msg="Removing container: fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949" id=1d2b5ccb-f331-420f-b737-f825a6ffac30 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:51:59 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:59.180275628Z" level=info msg="Error loading conmon cgroup of container fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949: cgroup deleted" id=1d2b5ccb-f331-420f-b737-f825a6ffac30 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:51:59 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:59.184709828Z" level=info msg="Removed container fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc/dashboard-metrics-scraper" id=1d2b5ccb-f331-420f-b737-f825a6ffac30 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.828529863Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.832673796Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.832705304Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.83272768Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.83594034Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.835977026Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.835994864Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.839430419Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.839481053Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.83950159Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.847090186Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.847124935Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.847146958Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.850466885Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.850503965Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	bfa9d999fddf9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   2                   fe3adba33adf0       dashboard-metrics-scraper-6ffb444bf9-kcvpc             kubernetes-dashboard
	8674225d875f7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   3720feecdda54       storage-provisioner                                    kube-system
	135099d7b9d60       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   5e1a8e85bf7cb       kubernetes-dashboard-855c9754f9-2j5n9                  kubernetes-dashboard
	a55ead4e461e5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   6d067a48860b9       busybox                                                default
	36769a73ca236       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   ebe97b7ec56d7       coredns-66bc5c9577-pr2h6                               kube-system
	73e76e8c891b9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           52 seconds ago      Running             kindnet-cni                 1                   4520ca4e3fef1       kindnet-4bq9f                                          kube-system
	fba9365f141d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   3720feecdda54       storage-provisioner                                    kube-system
	0f741173607eb       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           52 seconds ago      Running             kube-proxy                  1                   e171d74fb2af3       kube-proxy-7sl78                                       kube-system
	cbd9d49b05b8a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           59 seconds ago      Running             etcd                        1                   c5660b57e2577       etcd-default-k8s-diff-port-151605                      kube-system
	41f26b68d203d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           59 seconds ago      Running             kube-apiserver              1                   3b92d362786a8       kube-apiserver-default-k8s-diff-port-151605            kube-system
	c6a26bd3f3f3a       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           59 seconds ago      Running             kube-controller-manager     1                   2b80b52319610       kube-controller-manager-default-k8s-diff-port-151605   kube-system
	54cffecfcbe7d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           59 seconds ago      Running             kube-scheduler              1                   63a752d725e7c       kube-scheduler-default-k8s-diff-port-151605            kube-system
	
	
	==> coredns [36769a73ca236b1b9aa92ad718f5b335f85d3c1cb3912e1f3fbf541b2764e758] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53827 - 44965 "HINFO IN 7010563107646899880.874189890595753488. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.054452492s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-151605
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-151605
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=default-k8s-diff-port-151605
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_50_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-151605
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:52:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-151605
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                a385de42-c8e0-4943-b893-df4c54e93d41
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-pr2h6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-default-k8s-diff-port-151605                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-4bq9f                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-151605             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-151605    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-7sl78                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-151605             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kcvpc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2j5n9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 105s                 kube-proxy       
	  Normal   Starting                 52s                  kube-proxy       
	  Warning  CgroupV1                 2m4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  114s                 kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 114s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    114s                 kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     114s                 kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           110s                 node-controller  Node default-k8s-diff-port-151605 event: Registered Node default-k8s-diff-port-151605 in Controller
	  Normal   NodeReady                94s                  kubelet          Node default-k8s-diff-port-151605 status is now: NodeReady
	  Normal   Starting                 61s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 61s)    kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 61s)    kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 61s)    kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node default-k8s-diff-port-151605 event: Registered Node default-k8s-diff-port-151605 in Controller
	
	
	==> dmesg <==
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cbd9d49b05b8a5dd0dc77bf63238bdf30ee239621287d026e486c91a38c69194] <==
	{"level":"warn","ts":"2025-12-13T11:51:19.407165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.457328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.496259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.531073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.635634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.690691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.727390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.756994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.841684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.847769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.887161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.923927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.952456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.976332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.012557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.050115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.097100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.147826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.173050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.309263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.399169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.402948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.417718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.435842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.520551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54416","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:52:15 up  3:34,  0 user,  load average: 2.51, 2.66, 2.27
	Linux default-k8s-diff-port-151605 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73e76e8c891b9ea190a1e89188926f8eb848c83067c164ab1b20f2f773b8aaff] <==
	I1213 11:51:22.620567       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:51:22.622354       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:51:22.622553       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:51:22.624414       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:51:22.624515       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:51:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:51:22.827597       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:51:22.827621       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:51:22.827636       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:51:22.827969       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:51:52.828502       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:51:52.828698       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:51:52.828801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 11:51:52.828893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 11:51:54.427849       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:51:54.427944       1 metrics.go:72] Registering metrics
	I1213 11:51:54.428040       1 controller.go:711] "Syncing nftables rules"
	I1213 11:52:02.827591       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:52:02.828266       1 main.go:301] handling current node
	I1213 11:52:12.827745       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:52:12.827786       1 main.go:301] handling current node
	
	
	==> kube-apiserver [41f26b68d203d9d83d81376bab5feea3fb613ac275331c49aa37fbebfa938c29] <==
	I1213 11:51:21.725858       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:51:21.759755       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1213 11:51:21.762965       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:51:21.772671       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 11:51:21.772722       1 policy_source.go:240] refreshing policies
	I1213 11:51:21.803156       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:51:21.803210       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:51:21.803616       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:51:21.803710       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:51:21.803717       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:51:21.808110       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:51:21.814871       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 11:51:21.823863       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:51:21.838278       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:51:22.015949       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:51:22.278191       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:51:22.885128       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 11:51:22.939762       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 11:51:22.978407       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:51:22.993095       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:51:23.125902       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.12.109"}
	I1213 11:51:23.143415       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.67.232"}
	I1213 11:51:24.875862       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:51:25.224913       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:51:25.345840       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c6a26bd3f3f3a9aadd06af1e7019a9a4ad95fe27fc8cd6cd2866891c0293ac91] <==
	I1213 11:51:24.777043       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 11:51:24.777078       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:51:24.776934       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:51:24.780777       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:51:24.786984       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 11:51:24.794257       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:51:24.798573       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:51:24.802948       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 11:51:24.804791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:51:24.804810       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:51:24.804818       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:51:24.810033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:51:24.810150       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 11:51:24.815171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 11:51:24.815464       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 11:51:24.818750       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:51:24.818842       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:51:24.818865       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:51:24.818875       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 11:51:24.818889       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:51:24.818898       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:51:24.818907       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 11:51:24.821303       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 11:51:24.821395       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 11:51:24.828194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [0f741173607eb6e99619529190004990e5a1a175b044f55053251c961fb0bcdc] <==
	I1213 11:51:22.848936       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:51:23.031278       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:51:23.139803       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:51:23.139844       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:51:23.139920       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:51:23.193993       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:51:23.194129       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:51:23.198659       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:51:23.199058       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:51:23.199451       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:23.201030       1 config.go:200] "Starting service config controller"
	I1213 11:51:23.201086       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:51:23.201143       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:51:23.201185       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:51:23.201235       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:51:23.201268       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:51:23.202083       1 config.go:309] "Starting node config controller"
	I1213 11:51:23.202145       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:51:23.202187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:51:23.303098       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:51:23.303206       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:51:23.303233       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [54cffecfcbe7d79dd9b85c2aea28df92440fb375b7e38669ef73479908f14bd0] <==
	I1213 11:51:20.694480       1 serving.go:386] Generated self-signed cert in-memory
	I1213 11:51:22.121015       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:51:22.122797       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:22.141739       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 11:51:22.141787       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 11:51:22.141840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:22.141854       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:22.141959       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:51:22.141967       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:51:22.142752       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:51:22.142901       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:51:22.245345       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 11:51:22.245451       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:22.246191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.536663     784 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.536717     784 projected.go:196] Error preparing data for projected volume kube-api-access-2l7df for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2j5n9: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.536821     784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4636eef-77b2-455c-a3f1-d90d2318c5ec-kube-api-access-2l7df podName:f4636eef-77b2-455c-a3f1-d90d2318c5ec nodeName:}" failed. No retries permitted until 2025-12-13 11:51:27.036794267 +0000 UTC m=+12.329948953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2l7df" (UniqueName: "kubernetes.io/projected/f4636eef-77b2-455c-a3f1-d90d2318c5ec-kube-api-access-2l7df") pod "kubernetes-dashboard-855c9754f9-2j5n9" (UID: "f4636eef-77b2-455c-a3f1-d90d2318c5ec") : failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.538834     784 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.538878     784 projected.go:196] Error preparing data for projected volume kube-api-access-7ntcx for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.538949     784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29e66af1-6135-495a-9b05-318b350b1ca2-kube-api-access-7ntcx podName:29e66af1-6135-495a-9b05-318b350b1ca2 nodeName:}" failed. No retries permitted until 2025-12-13 11:51:27.038924081 +0000 UTC m=+12.332078766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7ntcx" (UniqueName: "kubernetes.io/projected/29e66af1-6135-495a-9b05-318b350b1ca2-kube-api-access-7ntcx") pod "dashboard-metrics-scraper-6ffb444bf9-kcvpc" (UID: "29e66af1-6135-495a-9b05-318b350b1ca2") : failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:27 default-k8s-diff-port-151605 kubelet[784]: W1213 11:51:27.458504     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/crio-fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8 WatchSource:0}: Error finding container fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8: Status 404 returned error can't find the container with id fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8
	Dec 13 11:51:28 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:28.039290     784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 11:51:32 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:32.108612     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2j5n9" podStartSLOduration=2.9886922350000003 podStartE2EDuration="7.108596274s" podCreationTimestamp="2025-12-13 11:51:25 +0000 UTC" firstStartedPulling="2025-12-13 11:51:27.177488365 +0000 UTC m=+12.470643051" lastFinishedPulling="2025-12-13 11:51:31.297392404 +0000 UTC m=+16.590547090" observedRunningTime="2025-12-13 11:51:32.107998108 +0000 UTC m=+17.401152794" watchObservedRunningTime="2025-12-13 11:51:32.108596274 +0000 UTC m=+17.401750969"
	Dec 13 11:51:38 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:38.080356     784 scope.go:117] "RemoveContainer" containerID="971520c28fa5bd7c19e9d003a8329a396311c4af31a4188ef10cd318ae256eb5"
	Dec 13 11:51:39 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:39.084516     784 scope.go:117] "RemoveContainer" containerID="971520c28fa5bd7c19e9d003a8329a396311c4af31a4188ef10cd318ae256eb5"
	Dec 13 11:51:39 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:39.085498     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:39 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:39.085791     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:51:47 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:47.433963     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:47 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:47.434699     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:51:53 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:53.122770     784 scope.go:117] "RemoveContainer" containerID="fba9365f141d5c048e73de3c4b23b2c1a27c25daee983fc11dd819f2303586c1"
	Dec 13 11:51:58 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:58.939378     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:59 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:59.141836     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:59 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:59.143671     784 scope.go:117] "RemoveContainer" containerID="bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06"
	Dec 13 11:51:59 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:59.146236     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:52:07 default-k8s-diff-port-151605 kubelet[784]: I1213 11:52:07.434579     784 scope.go:117] "RemoveContainer" containerID="bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06"
	Dec 13 11:52:07 default-k8s-diff-port-151605 kubelet[784]: E1213 11:52:07.434767     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:52:12 default-k8s-diff-port-151605 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:52:12 default-k8s-diff-port-151605 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:52:12 default-k8s-diff-port-151605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [135099d7b9d603df26fd1321cf6285c1801fdd808e45650834598b050c12ba25] <==
	2025/12/13 11:51:31 Using namespace: kubernetes-dashboard
	2025/12/13 11:51:31 Using in-cluster config to connect to apiserver
	2025/12/13 11:51:31 Using secret token for csrf signing
	2025/12/13 11:51:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 11:51:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 11:51:31 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 11:51:31 Generating JWE encryption key
	2025/12/13 11:51:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 11:51:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 11:51:32 Initializing JWE encryption key from synchronized object
	2025/12/13 11:51:32 Creating in-cluster Sidecar client
	2025/12/13 11:51:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:32 Serving insecurely on HTTP port: 9090
	2025/12/13 11:52:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:31 Starting overwatch
	
	
	==> storage-provisioner [8674225d875f75c6ce5f382b8e3fdc88bd212b7abc69bd7a39f03cdf100ec6fc] <==
	I1213 11:51:53.213684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:51:53.236300       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:51:53.236466       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 11:51:53.240174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:56.695189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:00.955734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:04.554084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:07.608020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:10.630557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:10.635220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:10.635372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:52:10.635620       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-151605_76d8a3f0-91aa-455e-9ab2-1246e0fb28cd!
	I1213 11:52:10.636324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6471950e-eece-40e8-8a15-868fd2831bde", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-151605_76d8a3f0-91aa-455e-9ab2-1246e0fb28cd became leader
	W1213 11:52:10.638935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:10.644195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:10.736100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-151605_76d8a3f0-91aa-455e-9ab2-1246e0fb28cd!
	W1213 11:52:12.647902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:12.655677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:14.658745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:14.663275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fba9365f141d5c048e73de3c4b23b2c1a27c25daee983fc11dd819f2303586c1] <==
	I1213 11:51:22.542693       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 11:51:52.561932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605: exit status 2 (374.104785ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-151605 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-151605
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-151605:

-- stdout --
	[
	    {
	        "Id": "ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d",
	        "Created": "2025-12-13T11:49:50.135294946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 597510,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:51:08.101502611Z",
	            "FinishedAt": "2025-12-13T11:51:07.236618995Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/hosts",
	        "LogPath": "/var/lib/docker/containers/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d-json.log",
	        "Name": "/default-k8s-diff-port-151605",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-151605:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-151605",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d",
	                "LowerDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5071f61a5ba74b8c26a46a195fc7ce2d5b47f49b801b792da027543cd1611276/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-151605",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-151605/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-151605",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-151605",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-151605",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2ceb56575a00733dc15b62cf1f232e3cf32d78f9b6471db710187037ab35ab0e",
	            "SandboxKey": "/var/run/docker/netns/2ceb56575a00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-151605": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:37:aa:59:76:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e01ba379a4d94c8de18912da562a485bb057ae2af70e58b76f1547550548184",
	                    "EndpointID": "144b78028dfa5749831619eb1cb4c4cbad0ba48c6675c0a36d0dd225e8e6321b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-151605",
	                        "ed91f41ddcee"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605: exit status 2 (393.931893ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-151605 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-151605 logs -n 25: (1.345294724s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-522461 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ delete  │ -p cert-options-522461                                                                                                                                                                                                                        │ cert-options-522461          │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:47 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:47 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-051699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │                     │
	│ stop    │ -p old-k8s-version-051699 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-051699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:48 UTC │
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:51:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:51:32.984818  600084 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:51:32.985030  600084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:32.985057  600084 out.go:374] Setting ErrFile to fd 2...
	I1213 11:51:32.985074  600084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:51:32.985355  600084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:51:32.985789  600084 out.go:368] Setting JSON to false
	I1213 11:51:32.987378  600084 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12845,"bootTime":1765613848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:51:32.987477  600084 start.go:143] virtualization:  
	I1213 11:51:32.992143  600084 out.go:179] * [embed-certs-326948] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:51:32.995397  600084 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:51:32.995477  600084 notify.go:221] Checking for updates...
	I1213 11:51:32.999132  600084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:51:33.002659  600084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:33.005858  600084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:51:33.008906  600084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:51:33.011815  600084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:51:33.015232  600084 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:33.015855  600084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:51:33.049060  600084 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:51:33.049189  600084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:33.164743  600084 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 11:51:33.152700014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:33.164855  600084 docker.go:319] overlay module found
	I1213 11:51:33.168098  600084 out.go:179] * Using the docker driver based on existing profile
	I1213 11:51:33.171115  600084 start.go:309] selected driver: docker
	I1213 11:51:33.171149  600084 start.go:927] validating driver "docker" against &{Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:33.171251  600084 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:51:33.172155  600084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:51:33.267963  600084 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 11:51:33.255540412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:51:33.268293  600084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:51:33.268316  600084 cni.go:84] Creating CNI manager for ""
	I1213 11:51:33.268363  600084 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:51:33.268399  600084 start.go:353] cluster config:
	{Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:33.271742  600084 out.go:179] * Starting "embed-certs-326948" primary control-plane node in "embed-certs-326948" cluster
	I1213 11:51:33.274786  600084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:51:33.277765  600084 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:51:33.280645  600084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:51:33.280712  600084 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 11:51:33.280722  600084 cache.go:65] Caching tarball of preloaded images
	I1213 11:51:33.280822  600084 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:51:33.280831  600084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 11:51:33.280950  600084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/config.json ...
	I1213 11:51:33.281156  600084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:51:33.316515  600084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:51:33.316534  600084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:51:33.316548  600084 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:51:33.316578  600084 start.go:360] acquireMachinesLock for embed-certs-326948: {Name:mk006cdb726d13b418884982bd33ef960e248469 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:51:33.316632  600084 start.go:364] duration metric: took 33.814µs to acquireMachinesLock for "embed-certs-326948"
	I1213 11:51:33.316650  600084 start.go:96] Skipping create...Using existing machine configuration
	I1213 11:51:33.316655  600084 fix.go:54] fixHost starting: 
	I1213 11:51:33.316919  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:33.334973  600084 fix.go:112] recreateIfNeeded on embed-certs-326948: state=Stopped err=<nil>
	W1213 11:51:33.335005  600084 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 11:51:34.989511  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:36.990627  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	I1213 11:51:33.338472  600084 out.go:252] * Restarting existing docker container for "embed-certs-326948" ...
	I1213 11:51:33.338560  600084 cli_runner.go:164] Run: docker start embed-certs-326948
	I1213 11:51:33.674634  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:33.699762  600084 kic.go:430] container "embed-certs-326948" state is running.
	I1213 11:51:33.700344  600084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:51:33.732990  600084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/config.json ...
	I1213 11:51:33.733226  600084 machine.go:94] provisionDockerMachine start ...
	I1213 11:51:33.733307  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:33.764596  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:33.764923  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:33.764942  600084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:51:33.768998  600084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:51:36.919267  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-326948
	
	I1213 11:51:36.919296  600084 ubuntu.go:182] provisioning hostname "embed-certs-326948"
	I1213 11:51:36.919366  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:36.937883  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:36.938211  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:36.938229  600084 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-326948 && echo "embed-certs-326948" | sudo tee /etc/hostname
	I1213 11:51:37.106551  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-326948
	
	I1213 11:51:37.106625  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:37.124749  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:37.125076  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:37.125096  600084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-326948' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-326948/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-326948' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:51:37.275900  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:51:37.275930  600084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:51:37.275965  600084 ubuntu.go:190] setting up certificates
	I1213 11:51:37.275981  600084 provision.go:84] configureAuth start
	I1213 11:51:37.276044  600084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:51:37.293863  600084 provision.go:143] copyHostCerts
	I1213 11:51:37.293949  600084 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:51:37.293959  600084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:51:37.294040  600084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:51:37.294179  600084 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:51:37.294192  600084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:51:37.294224  600084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:51:37.294287  600084 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:51:37.294293  600084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:51:37.294321  600084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:51:37.294384  600084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.embed-certs-326948 san=[127.0.0.1 192.168.76.2 embed-certs-326948 localhost minikube]
	I1213 11:51:37.679084  600084 provision.go:177] copyRemoteCerts
	I1213 11:51:37.679161  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:51:37.679200  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:37.697046  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:37.807285  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:51:37.825098  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 11:51:37.842815  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:51:37.861334  600084 provision.go:87] duration metric: took 585.3379ms to configureAuth
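configureAuth issues a server certificate whose SANs cover the entries in the san=[...] list above (container IP, loopback, machine name) and signs it with the local CA before copying it to /etc/docker. A self-contained sketch of issuing such a certificate with Go's crypto/x509; the throwaway CA and the hard-coded names and IPs are taken from the log for illustration only, and this is not minikube's provision code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the same kind of SAN list as the log line:
	// IPs 127.0.0.1 and 192.168.76.2, plus the machine and cluster names.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-326948"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-326948", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write the server certificate PEM, the file the provisioner later scp's as server.pem.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}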
	I1213 11:51:37.861362  600084 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:51:37.861568  600084 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:37.861675  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:37.878716  600084 main.go:143] libmachine: Using SSH client type: native
	I1213 11:51:37.879040  600084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1213 11:51:37.879059  600084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:51:38.304104  600084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:51:38.304127  600084 machine.go:97] duration metric: took 4.570880668s to provisionDockerMachine
	I1213 11:51:38.304139  600084 start.go:293] postStartSetup for "embed-certs-326948" (driver="docker")
	I1213 11:51:38.304150  600084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:51:38.304209  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:51:38.304248  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.326704  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.431776  600084 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:51:38.435463  600084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:51:38.435494  600084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:51:38.435539  600084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:51:38.435620  600084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:51:38.435732  600084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:51:38.435840  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:51:38.443915  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:51:38.462413  600084 start.go:296] duration metric: took 158.257898ms for postStartSetup
	I1213 11:51:38.462524  600084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:51:38.462577  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.479236  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.580880  600084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:51:38.585843  600084 fix.go:56] duration metric: took 5.269178309s for fixHost
	I1213 11:51:38.585868  600084 start.go:83] releasing machines lock for "embed-certs-326948", held for 5.269228287s
	I1213 11:51:38.585943  600084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-326948
	I1213 11:51:38.602941  600084 ssh_runner.go:195] Run: cat /version.json
	I1213 11:51:38.603003  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.603335  600084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:51:38.603387  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:38.621757  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.623198  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:38.820455  600084 ssh_runner.go:195] Run: systemctl --version
	I1213 11:51:38.827256  600084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:51:38.865114  600084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:51:38.869764  600084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:51:38.869893  600084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:51:38.877975  600084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 11:51:38.878002  600084 start.go:496] detecting cgroup driver to use...
	I1213 11:51:38.878050  600084 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:51:38.878120  600084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:51:38.893847  600084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:51:38.907331  600084 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:51:38.907409  600084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:51:38.924809  600084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:51:38.939709  600084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:51:39.066898  600084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:51:39.196417  600084 docker.go:234] disabling docker service ...
	I1213 11:51:39.196490  600084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:51:39.211616  600084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:51:39.224798  600084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:51:39.350599  600084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:51:39.473229  600084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:51:39.492626  600084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:51:39.507486  600084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:51:39.507624  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.517765  600084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:51:39.517846  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.527583  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.536761  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.546876  600084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:51:39.556061  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.565849  600084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.574794  600084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:51:39.584606  600084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:51:39.592334  600084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:51:39.599849  600084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:39.725768  600084 ssh_runner.go:195] Run: sudo systemctl restart crio
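The sed calls above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" next to it, before the daemon-reload and CRI-O restart. A sketch of the same substitutions applied to the config text in memory (the helper name and sample input are made up; minikube itself does this with sed over SSH):

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies the same edits as the sed one-liners above:
// set pause_image, drop any existing conmon_cgroup line, then set
// cgroup_manager and re-add conmon_cgroup = "pod" right after it.
func patchCrioConf(conf, pauseImage, cgroupMgr string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "`+cgroupMgr+"\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
}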
	I1213 11:51:39.955845  600084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:51:39.955919  600084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:51:39.959961  600084 start.go:564] Will wait 60s for crictl version
	I1213 11:51:39.960078  600084 ssh_runner.go:195] Run: which crictl
	I1213 11:51:39.964208  600084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:51:39.992782  600084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
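After the restart, minikube waits up to 60s for the CRI-O socket to appear and then for crictl to report a runtime version (the output above). A small sketch of that bounded polling; the timeout and paths are the ones from the log, and the helper function is hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitFor polls fn once a second until it succeeds or the deadline passes.
func waitFor(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	var err error
	for time.Now().Before(deadline) {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out after %s: %w", timeout, err)
}

func main() {
	sock := "/var/run/crio/crio.sock"

	// 1) wait for the socket path, as in "Will wait 60s for socket path".
	_ = waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	})

	// 2) wait for crictl to answer, as in "Will wait 60s for crictl version".
	_ = waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "/usr/local/bin/crictl", "version").Run()
	})

	fmt.Println("CRI-O socket and crictl version are ready")
}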
	I1213 11:51:39.992945  600084 ssh_runner.go:195] Run: crio --version
	I1213 11:51:40.033907  600084 ssh_runner.go:195] Run: crio --version
	I1213 11:51:40.067823  600084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 11:51:40.070931  600084 cli_runner.go:164] Run: docker network inspect embed-certs-326948 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:51:40.089047  600084 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 11:51:40.093885  600084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:40.104954  600084 kubeadm.go:884] updating cluster {Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:51:40.105077  600084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 11:51:40.105149  600084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:40.144948  600084 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:51:40.144974  600084 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:51:40.145031  600084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:51:40.174783  600084 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:51:40.174861  600084 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:51:40.174883  600084 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1213 11:51:40.175010  600084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-326948 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:51:40.175134  600084 ssh_runner.go:195] Run: crio config
	I1213 11:51:40.240434  600084 cni.go:84] Creating CNI manager for ""
	I1213 11:51:40.240460  600084 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:51:40.240504  600084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:51:40.240534  600084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-326948 NodeName:embed-certs-326948 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:51:40.240692  600084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-326948"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:51:40.240768  600084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 11:51:40.249060  600084 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:51:40.249132  600084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:51:40.256947  600084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 11:51:40.272622  600084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 11:51:40.290341  600084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
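The kubeadm.yaml.new just copied over is rendered from the options in the kubeadm.go:190 line above. A stripped-down sketch of producing such a file with text/template, covering only a handful of the fields visible in the log (this template is a simplification for illustration, not minikube's real one):

package main

import (
	"os"
	"text/template"
)

// A minimal slice of the InitConfiguration/ClusterConfiguration pair above;
// the fields mirror values visible in the log, nothing more.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	NodeName, NodeIP, CRISocket string
	KubernetesVersion           string
	PodSubnet, ServiceCIDR      string
	APIServerPort               int
}

func main() {
	p := params{
		NodeName:          "embed-certs-326948",
		NodeIP:            "192.168.76.2",
		CRISocket:         "/var/run/crio/crio.sock",
		KubernetesVersion: "v1.34.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		APIServerPort:     8443,
	}
	// Render to stdout; minikube writes the equivalent to kubeadm.yaml.new over SSH.
	template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p)
}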
	I1213 11:51:40.303810  600084 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:51:40.307420  600084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:51:40.318013  600084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:40.435145  600084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:40.453384  600084 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948 for IP: 192.168.76.2
	I1213 11:51:40.453407  600084 certs.go:195] generating shared ca certs ...
	I1213 11:51:40.453423  600084 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:40.453556  600084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:51:40.453613  600084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:51:40.453625  600084 certs.go:257] generating profile certs ...
	I1213 11:51:40.453718  600084 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/client.key
	I1213 11:51:40.453788  600084 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key.dff061d2
	I1213 11:51:40.453841  600084 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key
	I1213 11:51:40.453974  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:51:40.454014  600084 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:51:40.454026  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:51:40.454057  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:51:40.454102  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:51:40.454130  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:51:40.454192  600084 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:51:40.454787  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:51:40.479260  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:51:40.519669  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:51:40.544142  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:51:40.567634  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 11:51:40.588503  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:51:40.608743  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:51:40.648222  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/embed-certs-326948/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:51:40.672215  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:51:40.697478  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:51:40.727586  600084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:51:40.748998  600084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:51:40.762826  600084 ssh_runner.go:195] Run: openssl version
	I1213 11:51:40.769231  600084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.776982  600084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:51:40.788856  600084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.793192  600084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.793280  600084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:51:40.834721  600084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:51:40.842495  600084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.849954  600084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:51:40.857688  600084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.861534  600084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.861604  600084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:51:40.904007  600084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:51:40.911605  600084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.919642  600084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:51:40.927261  600084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.931257  600084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.931324  600084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:51:40.972944  600084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
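The openssl/ln sequence above makes each CA file under /usr/share/ca-certificates visible to OpenSSL by creating a <subject-hash>.0 symlink in /etc/ssl/certs. A sketch of the same two steps driven from Go, shelling out to openssl for the hash exactly as the log does (requires root; the path is the one from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and points
// /etc/ssl/certs/<hash>.0 at it, mirroring the `openssl x509 -hash -noout`
// plus `ln -fs` steps in the log.
func linkCACert(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}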
	I1213 11:51:40.981040  600084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:51:40.986918  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 11:51:41.029892  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 11:51:41.071287  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 11:51:41.117961  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 11:51:41.172175  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 11:51:41.214776  600084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
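Each of the -checkend 86400 runs above asks whether a control-plane certificate expires within the next 24 hours before the existing cluster is reused. The same check expressed with crypto/x509 (a sketch; the file path is taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires inside the given window, the Go analogue of
// `openssl x509 -noout -checkend 86400`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon) // a non-zero openssl exit code means "will expire"
}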
	I1213 11:51:41.271378  600084 kubeadm.go:401] StartCluster: {Name:embed-certs-326948 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-326948 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:51:41.271466  600084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:51:41.271610  600084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:51:41.347399  600084 cri.go:89] found id: "2f0d882fac60f1616055bed06c1f6058d2f4d9771c371fa9e130d01762278744"
	I1213 11:51:41.347422  600084 cri.go:89] found id: "5fa45fd0696ef89615d1d81b1bf2769d38c87713975e43422c105cb0d61cfdaa"
	I1213 11:51:41.347432  600084 cri.go:89] found id: "cb833c8e8af6645f23e9e2891cd88798a8d4211065330a18962b7d19db79c7ba"
	I1213 11:51:41.347436  600084 cri.go:89] found id: ""
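StartCluster begins by listing any existing kube-system containers so they can be unpaused; the crictl invocation above produced the three IDs just listed. A small sketch of issuing that query and splitting the IDs (the flags are copied from the log; the helper name is made up):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl query as the log:
// all containers (any state) carrying the kube-system namespace label,
// printed as bare IDs, one per line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("listing failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}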
	I1213 11:51:41.347488  600084 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 11:51:41.370430  600084 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:51:41Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:51:41.370523  600084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:51:41.387432  600084 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 11:51:41.387453  600084 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 11:51:41.387504  600084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 11:51:41.404349  600084 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 11:51:41.404951  600084 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-326948" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:41.405227  600084 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-326948" cluster setting kubeconfig missing "embed-certs-326948" context setting]
	I1213 11:51:41.405707  600084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:41.407160  600084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 11:51:41.422863  600084 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1213 11:51:41.422893  600084 kubeadm.go:602] duration metric: took 35.433698ms to restartPrimaryControlPlane
	I1213 11:51:41.422904  600084 kubeadm.go:403] duration metric: took 151.538382ms to StartCluster
	I1213 11:51:41.422919  600084 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:41.422991  600084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:51:41.424281  600084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:51:41.424506  600084 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:51:41.424796  600084 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:51:41.424843  600084 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 11:51:41.424909  600084 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-326948"
	I1213 11:51:41.424922  600084 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-326948"
	W1213 11:51:41.424933  600084 addons.go:248] addon storage-provisioner should already be in state true
	I1213 11:51:41.424954  600084 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:51:41.425388  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.425662  600084 addons.go:70] Setting dashboard=true in profile "embed-certs-326948"
	I1213 11:51:41.425688  600084 addons.go:239] Setting addon dashboard=true in "embed-certs-326948"
	W1213 11:51:41.425695  600084 addons.go:248] addon dashboard should already be in state true
	I1213 11:51:41.425719  600084 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:51:41.426172  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.426573  600084 addons.go:70] Setting default-storageclass=true in profile "embed-certs-326948"
	I1213 11:51:41.426594  600084 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-326948"
	I1213 11:51:41.426864  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.429153  600084 out.go:179] * Verifying Kubernetes components...
	I1213 11:51:41.432599  600084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:51:41.473852  600084 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:51:41.481581  600084 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:41.481610  600084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 11:51:41.481677  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:41.491460  600084 addons.go:239] Setting addon default-storageclass=true in "embed-certs-326948"
	W1213 11:51:41.491487  600084 addons.go:248] addon default-storageclass should already be in state true
	I1213 11:51:41.491659  600084 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:51:41.492109  600084 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:51:41.499858  600084 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 11:51:41.507260  600084 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1213 11:51:39.490140  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:41.493670  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	I1213 11:51:41.512405  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 11:51:41.512434  600084 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 11:51:41.512504  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:41.530771  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:41.543703  600084 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:41.543723  600084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 11:51:41.543794  600084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:51:41.565479  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:41.597329  600084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:51:41.747728  600084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 11:51:41.787611  600084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:51:41.804414  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 11:51:41.804439  600084 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 11:51:41.862761  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 11:51:41.862793  600084 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 11:51:41.892332  600084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 11:51:41.942039  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 11:51:41.942073  600084 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 11:51:42.049866  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 11:51:42.049890  600084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 11:51:42.105809  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 11:51:42.105838  600084 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 11:51:42.137871  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 11:51:42.137951  600084 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 11:51:42.161626  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 11:51:42.161724  600084 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 11:51:42.189524  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 11:51:42.189609  600084 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 11:51:42.228273  600084 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 11:51:42.228301  600084 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 11:51:42.252680  600084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 11:51:43.990581  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:45.990825  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	I1213 11:51:47.845489  600084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.097726855s)
	I1213 11:51:47.845551  600084 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.057871666s)
	I1213 11:51:47.845574  600084 node_ready.go:35] waiting up to 6m0s for node "embed-certs-326948" to be "Ready" ...
	I1213 11:51:47.845894  600084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.953538331s)
	I1213 11:51:47.846179  600084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.593462445s)
	I1213 11:51:47.849424  600084 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-326948 addons enable metrics-server
	
	I1213 11:51:47.877363  600084 node_ready.go:49] node "embed-certs-326948" is "Ready"
	I1213 11:51:47.877395  600084 node_ready.go:38] duration metric: took 31.802943ms for node "embed-certs-326948" to be "Ready" ...
	I1213 11:51:47.877410  600084 api_server.go:52] waiting for apiserver process to appear ...
	I1213 11:51:47.877470  600084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:51:47.886608  600084 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 11:51:47.889531  600084 addons.go:530] duration metric: took 6.464674737s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 11:51:47.906029  600084 api_server.go:72] duration metric: took 6.481484859s to wait for apiserver process to appear ...
	I1213 11:51:47.906123  600084 api_server.go:88] waiting for apiserver healthz status ...
	I1213 11:51:47.906159  600084 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 11:51:47.914925  600084 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 11:51:47.916581  600084 api_server.go:141] control plane version: v1.34.2
	I1213 11:51:47.916644  600084 api_server.go:131] duration metric: took 10.500339ms to wait for apiserver health ...
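The health gate here is a plain HTTPS GET against the apiserver's /healthz endpoint; a 200 with body "ok" counts as healthy, and only then is the control-plane version read. A minimal sketch of that probe against the endpoint from the log (certificate verification is skipped here for brevity, whereas minikube trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the same endpoint as the log. InsecureSkipVerify is a shortcut
	// for this sketch only; the real client presents the cluster CA instead.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}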
	I1213 11:51:47.916678  600084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 11:51:47.919944  600084 system_pods.go:59] 8 kube-system pods found
	I1213 11:51:47.920026  600084 system_pods.go:61] "coredns-66bc5c9577-459p2" [3b2fae9b-e0bd-4506-84c9-4385a6c2997c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:51:47.920053  600084 system_pods.go:61] "etcd-embed-certs-326948" [520e544b-4ca6-411f-927a-867164c6ae12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:51:47.920091  600084 system_pods.go:61] "kindnet-q82mh" [2861cef6-0bd3-400e-ad74-ce89a58a69eb] Running
	I1213 11:51:47.920118  600084 system_pods.go:61] "kube-apiserver-embed-certs-326948" [e88d539d-e0f2-4396-a899-615d61945720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:51:47.920142  600084 system_pods.go:61] "kube-controller-manager-embed-certs-326948" [61318a61-ad9f-4f1f-b1e7-1238077d0d53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:51:47.920182  600084 system_pods.go:61] "kube-proxy-5thrz" [b6f2714d-7089-4d6e-94ae-c0ec1ed42a46] Running
	I1213 11:51:47.920209  600084 system_pods.go:61] "kube-scheduler-embed-certs-326948" [8412daf2-f4c8-4870-a6e1-3a852d9c4929] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:51:47.920228  600084 system_pods.go:61] "storage-provisioner" [4cddf997-bcc1-4a6d-accf-779a6c4d1557] Running
	I1213 11:51:47.920265  600084 system_pods.go:74] duration metric: took 3.566329ms to wait for pod list to return data ...
	I1213 11:51:47.920291  600084 default_sa.go:34] waiting for default service account to be created ...
	I1213 11:51:47.923077  600084 default_sa.go:45] found service account: "default"
	I1213 11:51:47.923140  600084 default_sa.go:55] duration metric: took 2.821002ms for default service account to be created ...
	I1213 11:51:47.923164  600084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 11:51:47.926500  600084 system_pods.go:86] 8 kube-system pods found
	I1213 11:51:47.926578  600084 system_pods.go:89] "coredns-66bc5c9577-459p2" [3b2fae9b-e0bd-4506-84c9-4385a6c2997c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 11:51:47.926600  600084 system_pods.go:89] "etcd-embed-certs-326948" [520e544b-4ca6-411f-927a-867164c6ae12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 11:51:47.926638  600084 system_pods.go:89] "kindnet-q82mh" [2861cef6-0bd3-400e-ad74-ce89a58a69eb] Running
	I1213 11:51:47.926663  600084 system_pods.go:89] "kube-apiserver-embed-certs-326948" [e88d539d-e0f2-4396-a899-615d61945720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 11:51:47.926685  600084 system_pods.go:89] "kube-controller-manager-embed-certs-326948" [61318a61-ad9f-4f1f-b1e7-1238077d0d53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 11:51:47.926722  600084 system_pods.go:89] "kube-proxy-5thrz" [b6f2714d-7089-4d6e-94ae-c0ec1ed42a46] Running
	I1213 11:51:47.926747  600084 system_pods.go:89] "kube-scheduler-embed-certs-326948" [8412daf2-f4c8-4870-a6e1-3a852d9c4929] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 11:51:47.926768  600084 system_pods.go:89] "storage-provisioner" [4cddf997-bcc1-4a6d-accf-779a6c4d1557] Running
	I1213 11:51:47.926804  600084 system_pods.go:126] duration metric: took 3.62274ms to wait for k8s-apps to be running ...
	I1213 11:51:47.926831  600084 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 11:51:47.926915  600084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:51:47.948025  600084 system_svc.go:56] duration metric: took 21.184819ms WaitForService to wait for kubelet
	I1213 11:51:47.948056  600084 kubeadm.go:587] duration metric: took 6.523516492s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:51:47.948075  600084 node_conditions.go:102] verifying NodePressure condition ...
	I1213 11:51:47.952400  600084 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 11:51:47.952434  600084 node_conditions.go:123] node cpu capacity is 2
	I1213 11:51:47.952446  600084 node_conditions.go:105] duration metric: took 4.33829ms to run NodePressure ...
	I1213 11:51:47.952467  600084 start.go:242] waiting for startup goroutines ...
	I1213 11:51:47.952476  600084 start.go:247] waiting for cluster config update ...
	I1213 11:51:47.952487  600084 start.go:256] writing updated cluster config ...
	I1213 11:51:47.952795  600084 ssh_runner.go:195] Run: rm -f paused
	I1213 11:51:47.957541  600084 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:51:47.962334  600084 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-459p2" in "kube-system" namespace to be "Ready" or be gone ...
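pod_ready.go then polls each matching kube-system pod until its Ready condition is True or the 4m budget above runs out; the repeated "is not Ready" warnings that follow are those polls. A compact sketch of an equivalent wait using client-go, assuming a kubeconfig in the default location (this is an illustration, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind pod_ready.go: the PodReady condition must be True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	name := "coredns-66bc5c9577-459p2" // pod name taken from the log
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Printf("gave up waiting for %q: %v\n", name, ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}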
	W1213 11:51:48.490457  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:50.990468  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:49.994052  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:51:52.468704  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:51:53.489991  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:55.490272  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:57.490898  597382 pod_ready.go:104] pod "coredns-66bc5c9577-pr2h6" is not "Ready", error: <nil>
	W1213 11:51:54.469787  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:51:56.968982  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	I1213 11:51:58.490193  597382 pod_ready.go:94] pod "coredns-66bc5c9577-pr2h6" is "Ready"
	I1213 11:51:58.490223  597382 pod_ready.go:86] duration metric: took 35.006040895s for pod "coredns-66bc5c9577-pr2h6" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.493216  597382 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.498150  597382 pod_ready.go:94] pod "etcd-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:58.498179  597382 pod_ready.go:86] duration metric: took 4.934085ms for pod "etcd-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.500696  597382 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.506302  597382 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:58.506330  597382 pod_ready.go:86] duration metric: took 5.605984ms for pod "kube-apiserver-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.508966  597382 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.688031  597382 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:58.688059  597382 pod_ready.go:86] duration metric: took 179.066421ms for pod "kube-controller-manager-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:58.888161  597382 pod_ready.go:83] waiting for pod "kube-proxy-7sl78" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.289184  597382 pod_ready.go:94] pod "kube-proxy-7sl78" is "Ready"
	I1213 11:51:59.289209  597382 pod_ready.go:86] duration metric: took 401.020443ms for pod "kube-proxy-7sl78" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.488774  597382 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.888078  597382 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-151605" is "Ready"
	I1213 11:51:59.888110  597382 pod_ready.go:86] duration metric: took 399.308972ms for pod "kube-scheduler-default-k8s-diff-port-151605" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 11:51:59.888123  597382 pod_ready.go:40] duration metric: took 36.478953164s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 11:51:59.943127  597382 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 11:51:59.946349  597382 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-151605" cluster and "default" namespace by default
	W1213 11:51:58.971675  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:01.470668  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:03.968114  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:05.968529  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:08.467468  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:10.468103  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	W1213 11:52:12.468238  600084 pod_ready.go:104] pod "coredns-66bc5c9577-459p2" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.944057988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.955187149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.958968856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.988010237Z" level=info msg="Created container bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc/dashboard-metrics-scraper" id=b5ecc76f-fcf5-4540-9d63-5cc2cb959f85 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.990598983Z" level=info msg="Starting container: bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06" id=43f8e72a-6c49-4e0e-aabe-2726725aee8f name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:51:58 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:58.994152315Z" level=info msg="Started container" PID=1669 containerID=bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc/dashboard-metrics-scraper id=43f8e72a-6c49-4e0e-aabe-2726725aee8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8
	Dec 13 11:51:58 default-k8s-diff-port-151605 conmon[1667]: conmon bfa9d999fddf9ef31c4e <ninfo>: container 1669 exited with status 1
	Dec 13 11:51:59 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:59.157922348Z" level=info msg="Removing container: fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949" id=1d2b5ccb-f331-420f-b737-f825a6ffac30 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:51:59 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:59.180275628Z" level=info msg="Error loading conmon cgroup of container fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949: cgroup deleted" id=1d2b5ccb-f331-420f-b737-f825a6ffac30 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:51:59 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:51:59.184709828Z" level=info msg="Removed container fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc/dashboard-metrics-scraper" id=1d2b5ccb-f331-420f-b737-f825a6ffac30 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.828529863Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.832673796Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.832705304Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.83272768Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.83594034Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.835977026Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.835994864Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.839430419Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.839481053Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.83950159Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.847090186Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.847124935Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.847146958Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.850466885Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:02 default-k8s-diff-port-151605 crio[654]: time="2025-12-13T11:52:02.850503965Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	bfa9d999fddf9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   fe3adba33adf0       dashboard-metrics-scraper-6ffb444bf9-kcvpc             kubernetes-dashboard
	8674225d875f7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   3720feecdda54       storage-provisioner                                    kube-system
	135099d7b9d60       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   5e1a8e85bf7cb       kubernetes-dashboard-855c9754f9-2j5n9                  kubernetes-dashboard
	a55ead4e461e5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   6d067a48860b9       busybox                                                default
	36769a73ca236       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   ebe97b7ec56d7       coredns-66bc5c9577-pr2h6                               kube-system
	73e76e8c891b9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   4520ca4e3fef1       kindnet-4bq9f                                          kube-system
	fba9365f141d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   3720feecdda54       storage-provisioner                                    kube-system
	0f741173607eb       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           55 seconds ago       Running             kube-proxy                  1                   e171d74fb2af3       kube-proxy-7sl78                                       kube-system
	cbd9d49b05b8a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           About a minute ago   Running             etcd                        1                   c5660b57e2577       etcd-default-k8s-diff-port-151605                      kube-system
	41f26b68d203d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           About a minute ago   Running             kube-apiserver              1                   3b92d362786a8       kube-apiserver-default-k8s-diff-port-151605            kube-system
	c6a26bd3f3f3a       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           About a minute ago   Running             kube-controller-manager     1                   2b80b52319610       kube-controller-manager-default-k8s-diff-port-151605   kube-system
	54cffecfcbe7d       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           About a minute ago   Running             kube-scheduler              1                   63a752d725e7c       kube-scheduler-default-k8s-diff-port-151605            kube-system
	
	
	==> coredns [36769a73ca236b1b9aa92ad718f5b335f85d3c1cb3912e1f3fbf541b2764e758] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53827 - 44965 "HINFO IN 7010563107646899880.874189890595753488. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.054452492s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-151605
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-151605
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=default-k8s-diff-port-151605
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_50_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-151605
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:52:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:51:52 +0000   Sat, 13 Dec 2025 11:50:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-151605
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                a385de42-c8e0-4943-b893-df4c54e93d41
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-pr2h6                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-default-k8s-diff-port-151605                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-4bq9f                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-151605             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-151605    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-7sl78                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-151605             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kcvpc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2j5n9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 108s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Warning  CgroupV1                 2m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           112s                 node-controller  Node default-k8s-diff-port-151605 event: Registered Node default-k8s-diff-port-151605 in Controller
	  Normal   NodeReady                96s                  kubelet          Node default-k8s-diff-port-151605 status is now: NodeReady
	  Normal   Starting                 63s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)    kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)    kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)    kubelet          Node default-k8s-diff-port-151605 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                  node-controller  Node default-k8s-diff-port-151605 event: Registered Node default-k8s-diff-port-151605 in Controller
	
	
	==> dmesg <==
	[Dec13 11:21] overlayfs: idmapped layers are currently not supported
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cbd9d49b05b8a5dd0dc77bf63238bdf30ee239621287d026e486c91a38c69194] <==
	{"level":"warn","ts":"2025-12-13T11:51:19.407165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.457328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.496259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.531073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.635634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.690691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.727390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.756994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.841684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.847769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.887161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.923927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.952456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:19.976332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.012557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.050115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.097100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.147826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.173050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.309263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.399169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.402948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.417718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.435842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:20.520551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54416","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:52:17 up  3:34,  0 user,  load average: 2.51, 2.66, 2.27
	Linux default-k8s-diff-port-151605 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [73e76e8c891b9ea190a1e89188926f8eb848c83067c164ab1b20f2f773b8aaff] <==
	I1213 11:51:22.620567       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:51:22.622354       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 11:51:22.622553       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:51:22.624414       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:51:22.624515       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:51:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:51:22.827597       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:51:22.827621       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:51:22.827636       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:51:22.827969       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:51:52.828502       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:51:52.828698       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:51:52.828801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 11:51:52.828893       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 11:51:54.427849       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:51:54.427944       1 metrics.go:72] Registering metrics
	I1213 11:51:54.428040       1 controller.go:711] "Syncing nftables rules"
	I1213 11:52:02.827591       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:52:02.828266       1 main.go:301] handling current node
	I1213 11:52:12.827745       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 11:52:12.827786       1 main.go:301] handling current node
	
	
	==> kube-apiserver [41f26b68d203d9d83d81376bab5feea3fb613ac275331c49aa37fbebfa938c29] <==
	I1213 11:51:21.725858       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 11:51:21.759755       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1213 11:51:21.762965       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 11:51:21.772671       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 11:51:21.772722       1 policy_source.go:240] refreshing policies
	I1213 11:51:21.803156       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:51:21.803210       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:51:21.803616       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:51:21.803710       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:51:21.803717       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:51:21.808110       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:51:21.814871       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 11:51:21.823863       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:51:21.838278       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:51:22.015949       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:51:22.278191       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:51:22.885128       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 11:51:22.939762       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 11:51:22.978407       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:51:22.993095       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:51:23.125902       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.12.109"}
	I1213 11:51:23.143415       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.67.232"}
	I1213 11:51:24.875862       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:51:25.224913       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:51:25.345840       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c6a26bd3f3f3a9aadd06af1e7019a9a4ad95fe27fc8cd6cd2866891c0293ac91] <==
	I1213 11:51:24.777043       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 11:51:24.777078       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 11:51:24.776934       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:51:24.780777       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 11:51:24.786984       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 11:51:24.794257       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 11:51:24.798573       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:51:24.802948       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 11:51:24.804791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:51:24.804810       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:51:24.804818       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:51:24.810033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 11:51:24.810150       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 11:51:24.815171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 11:51:24.815464       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 11:51:24.818750       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 11:51:24.818842       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 11:51:24.818865       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:51:24.818875       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 11:51:24.818889       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:51:24.818898       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:51:24.818907       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 11:51:24.821303       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 11:51:24.821395       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 11:51:24.828194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [0f741173607eb6e99619529190004990e5a1a175b044f55053251c961fb0bcdc] <==
	I1213 11:51:22.848936       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:51:23.031278       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:51:23.139803       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:51:23.139844       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 11:51:23.139920       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:51:23.193993       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:51:23.194129       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:51:23.198659       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:51:23.199058       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:51:23.199451       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:23.201030       1 config.go:200] "Starting service config controller"
	I1213 11:51:23.201086       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:51:23.201143       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:51:23.201185       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:51:23.201235       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:51:23.201268       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:51:23.202083       1 config.go:309] "Starting node config controller"
	I1213 11:51:23.202145       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:51:23.202187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:51:23.303098       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:51:23.303206       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:51:23.303233       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [54cffecfcbe7d79dd9b85c2aea28df92440fb375b7e38669ef73479908f14bd0] <==
	I1213 11:51:20.694480       1 serving.go:386] Generated self-signed cert in-memory
	I1213 11:51:22.121015       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:51:22.122797       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:22.141739       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1213 11:51:22.141787       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1213 11:51:22.141840       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:22.141854       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:22.141959       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:51:22.141967       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1213 11:51:22.142752       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:51:22.142901       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:51:22.245345       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1213 11:51:22.245451       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:22.246191       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.536663     784 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.536717     784 projected.go:196] Error preparing data for projected volume kube-api-access-2l7df for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2j5n9: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.536821     784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4636eef-77b2-455c-a3f1-d90d2318c5ec-kube-api-access-2l7df podName:f4636eef-77b2-455c-a3f1-d90d2318c5ec nodeName:}" failed. No retries permitted until 2025-12-13 11:51:27.036794267 +0000 UTC m=+12.329948953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2l7df" (UniqueName: "kubernetes.io/projected/f4636eef-77b2-455c-a3f1-d90d2318c5ec-kube-api-access-2l7df") pod "kubernetes-dashboard-855c9754f9-2j5n9" (UID: "f4636eef-77b2-455c-a3f1-d90d2318c5ec") : failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.538834     784 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.538878     784 projected.go:196] Error preparing data for projected volume kube-api-access-7ntcx for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc: failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:26 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:26.538949     784 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29e66af1-6135-495a-9b05-318b350b1ca2-kube-api-access-7ntcx podName:29e66af1-6135-495a-9b05-318b350b1ca2 nodeName:}" failed. No retries permitted until 2025-12-13 11:51:27.038924081 +0000 UTC m=+12.332078766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7ntcx" (UniqueName: "kubernetes.io/projected/29e66af1-6135-495a-9b05-318b350b1ca2-kube-api-access-7ntcx") pod "dashboard-metrics-scraper-6ffb444bf9-kcvpc" (UID: "29e66af1-6135-495a-9b05-318b350b1ca2") : failed to sync configmap cache: timed out waiting for the condition
	Dec 13 11:51:27 default-k8s-diff-port-151605 kubelet[784]: W1213 11:51:27.458504     784 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/ed91f41ddceeea9c49d3cda5d1ac00c4e2120cece97309de48a78f9e1a53979d/crio-fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8 WatchSource:0}: Error finding container fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8: Status 404 returned error can't find the container with id fe3adba33adf011e45bf7d39801cc9546c49c46b9050676a796e5d95afd195d8
	Dec 13 11:51:28 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:28.039290     784 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 11:51:32 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:32.108612     784 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2j5n9" podStartSLOduration=2.9886922350000003 podStartE2EDuration="7.108596274s" podCreationTimestamp="2025-12-13 11:51:25 +0000 UTC" firstStartedPulling="2025-12-13 11:51:27.177488365 +0000 UTC m=+12.470643051" lastFinishedPulling="2025-12-13 11:51:31.297392404 +0000 UTC m=+16.590547090" observedRunningTime="2025-12-13 11:51:32.107998108 +0000 UTC m=+17.401152794" watchObservedRunningTime="2025-12-13 11:51:32.108596274 +0000 UTC m=+17.401750969"
	Dec 13 11:51:38 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:38.080356     784 scope.go:117] "RemoveContainer" containerID="971520c28fa5bd7c19e9d003a8329a396311c4af31a4188ef10cd318ae256eb5"
	Dec 13 11:51:39 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:39.084516     784 scope.go:117] "RemoveContainer" containerID="971520c28fa5bd7c19e9d003a8329a396311c4af31a4188ef10cd318ae256eb5"
	Dec 13 11:51:39 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:39.085498     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:39 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:39.085791     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:51:47 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:47.433963     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:47 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:47.434699     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:51:53 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:53.122770     784 scope.go:117] "RemoveContainer" containerID="fba9365f141d5c048e73de3c4b23b2c1a27c25daee983fc11dd819f2303586c1"
	Dec 13 11:51:58 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:58.939378     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:59 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:59.141836     784 scope.go:117] "RemoveContainer" containerID="fa8f51639ae63fa3b4f5e394f684c531467e95db08e68abfa9aa260a2e769949"
	Dec 13 11:51:59 default-k8s-diff-port-151605 kubelet[784]: I1213 11:51:59.143671     784 scope.go:117] "RemoveContainer" containerID="bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06"
	Dec 13 11:51:59 default-k8s-diff-port-151605 kubelet[784]: E1213 11:51:59.146236     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:52:07 default-k8s-diff-port-151605 kubelet[784]: I1213 11:52:07.434579     784 scope.go:117] "RemoveContainer" containerID="bfa9d999fddf9ef31c4e35a493c5e2700f9f33c2f1d8e506c0d15b86f7760d06"
	Dec 13 11:52:07 default-k8s-diff-port-151605 kubelet[784]: E1213 11:52:07.434767     784 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kcvpc_kubernetes-dashboard(29e66af1-6135-495a-9b05-318b350b1ca2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kcvpc" podUID="29e66af1-6135-495a-9b05-318b350b1ca2"
	Dec 13 11:52:12 default-k8s-diff-port-151605 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:52:12 default-k8s-diff-port-151605 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:52:12 default-k8s-diff-port-151605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [135099d7b9d603df26fd1321cf6285c1801fdd808e45650834598b050c12ba25] <==
	2025/12/13 11:51:31 Using namespace: kubernetes-dashboard
	2025/12/13 11:51:31 Using in-cluster config to connect to apiserver
	2025/12/13 11:51:31 Using secret token for csrf signing
	2025/12/13 11:51:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 11:51:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 11:51:31 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 11:51:31 Generating JWE encryption key
	2025/12/13 11:51:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 11:51:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 11:51:32 Initializing JWE encryption key from synchronized object
	2025/12/13 11:51:32 Creating in-cluster Sidecar client
	2025/12/13 11:51:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:32 Serving insecurely on HTTP port: 9090
	2025/12/13 11:52:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:31 Starting overwatch
	
	
	==> storage-provisioner [8674225d875f75c6ce5f382b8e3fdc88bd212b7abc69bd7a39f03cdf100ec6fc] <==
	I1213 11:51:53.213684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:51:53.236300       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:51:53.236466       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 11:51:53.240174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:51:56.695189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:00.955734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:04.554084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:07.608020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:10.630557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:10.635220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:10.635372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:52:10.635620       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-151605_76d8a3f0-91aa-455e-9ab2-1246e0fb28cd!
	I1213 11:52:10.636324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6471950e-eece-40e8-8a15-868fd2831bde", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-151605_76d8a3f0-91aa-455e-9ab2-1246e0fb28cd became leader
	W1213 11:52:10.638935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:10.644195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:10.736100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-151605_76d8a3f0-91aa-455e-9ab2-1246e0fb28cd!
	W1213 11:52:12.647902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:12.655677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:14.658745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:14.663275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:16.669747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:16.676968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fba9365f141d5c048e73de3c4b23b2c1a27c25daee983fc11dd819f2303586c1] <==
	I1213 11:51:22.542693       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 11:51:52.561932       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605: exit status 2 (391.355958ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-151605 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.57s)

TestStartStop/group/no-preload/serial/FirstStart (514.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m32.299043169s)

                                                
                                                
-- stdout --
	* [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:52:22.177878  603921 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:22.177999  603921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:22.178011  603921 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:22.178016  603921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:22.178255  603921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:22.178669  603921 out.go:368] Setting JSON to false
	I1213 11:52:22.179625  603921 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12895,"bootTime":1765613848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:22.179698  603921 start.go:143] virtualization:  
	I1213 11:52:22.183759  603921 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:22.187220  603921 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:22.187293  603921 notify.go:221] Checking for updates...
	I1213 11:52:22.194687  603921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:22.202302  603921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:22.205231  603921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:22.208078  603921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:22.210961  603921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:22.214458  603921 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:52:22.214574  603921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:22.242903  603921 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:22.243027  603921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:22.310771  603921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:52:22.30036342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:22.310879  603921 docker.go:319] overlay module found
	I1213 11:52:22.315971  603921 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:22.318784  603921 start.go:309] selected driver: docker
	I1213 11:52:22.318803  603921 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:22.318817  603921 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:22.319579  603921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:22.380053  603921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:52:22.371010804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:22.380204  603921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:52:22.380437  603921 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:52:22.383329  603921 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:22.386243  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:22.386313  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:22.386327  603921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:22.386408  603921 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:22.389646  603921 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 11:52:22.392478  603921 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:22.395420  603921 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:22.398416  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:22.398505  603921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:22.398545  603921 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 11:52:22.398575  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json: {Name:mkec3b7ed172f77da3b248fbbf20fa0dbee47daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:22.400508  603921 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.400655  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 11:52:22.400685  603921 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.853487ms
	I1213 11:52:22.400708  603921 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 11:52:22.400731  603921 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.401593  603921 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:22.402011  603921 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402185  603921 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:22.402473  603921 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402632  603921 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:22.402788  603921 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402897  603921 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:22.403057  603921 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403186  603921 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:22.403350  603921 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403414  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 11:52:22.403426  603921 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.443µs
	I1213 11:52:22.403434  603921 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 11:52:22.403457  603921 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403493  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 11:52:22.403502  603921 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 54.286µs
	I1213 11:52:22.403549  603921 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 11:52:22.405169  603921 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:22.405591  603921 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:22.406004  603921 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:22.406392  603921 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:22.406763  603921 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:22.423280  603921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:22.423306  603921 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:22.423321  603921 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:22.423351  603921 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.423450  603921 start.go:364] duration metric: took 84.382µs to acquireMachinesLock for "no-preload-307409"
	I1213 11:52:22.423480  603921 start.go:93] Provisioning new machine with config: &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:22.423661  603921 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:22.429079  603921 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:22.429336  603921 start.go:159] libmachine.API.Create for "no-preload-307409" (driver="docker")
	I1213 11:52:22.429376  603921 client.go:173] LocalClient.Create starting
	I1213 11:52:22.429452  603921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:22.429493  603921 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:22.429513  603921 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:22.429576  603921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:22.429646  603921 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:22.429666  603921 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:22.430121  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:22.448911  603921 cli_runner.go:211] docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:22.448997  603921 network_create.go:284] running [docker network inspect no-preload-307409] to gather additional debugging logs...
	I1213 11:52:22.449017  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409
	W1213 11:52:22.468248  603921 cli_runner.go:211] docker network inspect no-preload-307409 returned with exit code 1
	I1213 11:52:22.468284  603921 network_create.go:287] error running [docker network inspect no-preload-307409]: docker network inspect no-preload-307409: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-307409 not found
	I1213 11:52:22.468303  603921 network_create.go:289] output of [docker network inspect no-preload-307409]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-307409 not found
	
	** /stderr **
	I1213 11:52:22.468404  603921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:22.485064  603921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:22.485424  603921 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:22.485663  603921 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:22.485957  603921 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5b063c432202 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:6c:83:b3:7b:3a} reservation:<nil>}
	I1213 11:52:22.486426  603921 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bb83e0}
	I1213 11:52:22.486448  603921 network_create.go:124] attempt to create docker network no-preload-307409 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 11:52:22.486504  603921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-307409 no-preload-307409
	I1213 11:52:22.561619  603921 network_create.go:108] docker network no-preload-307409 192.168.85.0/24 created
	I1213 11:52:22.561649  603921 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-307409" container
	I1213 11:52:22.561735  603921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:22.577243  603921 cli_runner.go:164] Run: docker volume create no-preload-307409 --label name.minikube.sigs.k8s.io=no-preload-307409 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:22.597274  603921 oci.go:103] Successfully created a docker volume no-preload-307409
	I1213 11:52:22.597374  603921 cli_runner.go:164] Run: docker run --rm --name no-preload-307409-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307409 --entrypoint /usr/bin/test -v no-preload-307409:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:22.724954  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:22.752376  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:52:22.778070  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:22.797264  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:52:22.805390  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:23.209223  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 11:52:23.209301  603921 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 806.245475ms
	I1213 11:52:23.209330  603921 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 11:52:23.266936  603921 oci.go:107] Successfully prepared a docker volume no-preload-307409
	I1213 11:52:23.266994  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1213 11:52:23.267122  603921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:23.267237  603921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:23.342732  603921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-307409 --name no-preload-307409 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307409 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-307409 --network no-preload-307409 --ip 192.168.85.2 --volume no-preload-307409:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:23.695331  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 11:52:23.695405  603921 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.2929342s
	I1213 11:52:23.695435  603921 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 11:52:23.714188  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 11:52:23.714266  603921 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.312276464s
	I1213 11:52:23.714295  603921 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 11:52:23.746751  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Running}}
	I1213 11:52:23.749641  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 11:52:23.749678  603921 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.346893086s
	I1213 11:52:23.749691  603921 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 11:52:23.778616  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:23.802046  603921 cli_runner.go:164] Run: docker exec no-preload-307409 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:23.818032  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 11:52:23.818058  603921 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.417329777s
	I1213 11:52:23.818070  603921 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 11:52:23.818085  603921 cache.go:87] Successfully saved all images to host disk.
	I1213 11:52:23.869927  603921 oci.go:144] the created container "no-preload-307409" has a running status.
	I1213 11:52:23.869977  603921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa...
	I1213 11:52:23.990936  603921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:24.020412  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:24.046398  603921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:24.046421  603921 kic_runner.go:114] Args: [docker exec --privileged no-preload-307409 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:24.114724  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:24.145665  603921 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:24.145765  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:24.178680  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:24.179021  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:24.179031  603921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:24.179772  603921 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:52:27.331003  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 11:52:27.331028  603921 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 11:52:27.331091  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.350635  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.351104  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.351127  603921 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 11:52:27.517546  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 11:52:27.517640  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.537725  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.538047  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.538069  603921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:27.687673  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:52:27.687762  603921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:27.687826  603921 ubuntu.go:190] setting up certificates
	I1213 11:52:27.687859  603921 provision.go:84] configureAuth start
	I1213 11:52:27.687988  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:27.704465  603921 provision.go:143] copyHostCerts
	I1213 11:52:27.704533  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:27.704542  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:27.704618  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:27.704711  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:27.704717  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:27.704742  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:27.704793  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:27.704798  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:27.704821  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:27.704870  603921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
	I1213 11:52:27.799233  603921 provision.go:177] copyRemoteCerts
	I1213 11:52:27.799303  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:27.799354  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.816072  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:27.919366  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:27.939339  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:27.957398  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:27.977029  603921 provision.go:87] duration metric: took 289.128062ms to configureAuth
	I1213 11:52:27.977059  603921 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:27.977329  603921 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:27.977459  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.994988  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.995311  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.995346  603921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:28.387156  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:28.387181  603921 machine.go:97] duration metric: took 4.241492519s to provisionDockerMachine
	I1213 11:52:28.387193  603921 client.go:176] duration metric: took 5.957805202s to LocalClient.Create
	I1213 11:52:28.387207  603921 start.go:167] duration metric: took 5.957873469s to libmachine.API.Create "no-preload-307409"
	I1213 11:52:28.387215  603921 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 11:52:28.387226  603921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:28.387291  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:28.387336  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.404972  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.515880  603921 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:28.519219  603921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:28.519251  603921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:28.519263  603921 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:28.519320  603921 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:28.519410  603921 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:28.519562  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:28.526963  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:28.545859  603921 start.go:296] duration metric: took 158.63039ms for postStartSetup
	I1213 11:52:28.546269  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:28.571235  603921 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 11:52:28.571559  603921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:28.571611  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.589707  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.696545  603921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:28.701185  603921 start.go:128] duration metric: took 6.27750586s to createHost
	I1213 11:52:28.701209  603921 start.go:83] releasing machines lock for "no-preload-307409", held for 6.27775003s
	I1213 11:52:28.701287  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:28.718595  603921 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:28.718648  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.718908  603921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:28.718966  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.745537  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.751810  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.847455  603921 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:28.969867  603921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:29.010183  603921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:29.014670  603921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:29.014799  603921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:29.046386  603921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:52:29.046411  603921 start.go:496] detecting cgroup driver to use...
	I1213 11:52:29.046444  603921 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:29.046493  603921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:29.064822  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:29.078520  603921 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:29.078608  603921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:29.096990  603921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:29.116180  603921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:29.242070  603921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:29.378676  603921 docker.go:234] disabling docker service ...
	I1213 11:52:29.378760  603921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:29.401781  603921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:29.417362  603921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:29.558549  603921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:29.695156  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:29.709160  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:29.724923  603921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:29.725028  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.733811  603921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:29.733884  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.742902  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.752357  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.761431  603921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:29.770783  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.779375  603921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.793009  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.802451  603921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:29.811164  603921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:29.818609  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:29.942303  603921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:52:30.130461  603921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:30.130567  603921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:30.135067  603921 start.go:564] Will wait 60s for crictl version
	I1213 11:52:30.135148  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.139648  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:30.167916  603921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:52:30.168057  603921 ssh_runner.go:195] Run: crio --version
	I1213 11:52:30.201235  603921 ssh_runner.go:195] Run: crio --version
	I1213 11:52:30.240166  603921 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:30.243017  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:30.259990  603921 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:30.264096  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:30.274510  603921 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:30.274625  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:30.274673  603921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:30.299868  603921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:52:30.299895  603921 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 11:52:30.299939  603921 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:30.300144  603921 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.300228  603921 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.300318  603921 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.300422  603921 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.300512  603921 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.300599  603921 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.300694  603921 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.301694  603921 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.301935  603921 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.302103  603921 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.302258  603921 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:30.302557  603921 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.302733  603921 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.302971  603921 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.303142  603921 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.527419  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.555499  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1213 11:52:30.570567  603921 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 11:52:30.570662  603921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.570728  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.584640  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.591270  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.595713  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.616170  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.619381  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.622807  603921 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 11:52:30.622860  603921 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.622946  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.623055  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.710823  603921 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 11:52:30.710983  603921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.711082  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.710930  603921 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 11:52:30.711200  603921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.711241  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.736060  603921 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 11:52:30.736163  603921 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.736234  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.740106  603921 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 11:52:30.740189  603921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.740262  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.748359  603921 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 11:52:30.748463  603921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.748511  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.748555  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.748628  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.748683  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.748738  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.748788  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.748845  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.856302  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.856487  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.856521  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.856573  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.856627  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.856653  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.856693  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.971700  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:52:30.971783  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.971816  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.971845  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.971874  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.971903  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.971935  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.972193  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:52:31.074055  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.074094  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1213 11:52:31.074184  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:31.074205  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:52:31.074277  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:31.074302  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 11:52:31.074328  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 11:52:31.074347  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:52:31.074371  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.074278  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:31.074412  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:31.074438  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:31.074484  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:31.112864  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 11:52:31.112902  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 11:52:31.112967  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1213 11:52:31.112980  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1213 11:52:31.123045  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.123083  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1213 11:52:31.123160  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 11:52:31.123177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1213 11:52:31.139055  603921 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1213 11:52:31.139150  603921 retry.go:31] will retry after 147.135859ms: ssh: rejected: connect failed (open failed)
	I1213 11:52:31.139250  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.139295  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1213 11:52:31.139384  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.139650  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:31.139777  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:31.139869  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.188122  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:31.202626  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:31.288550  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.364969  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.365219  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1213 11:52:31.396116  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	W1213 11:52:31.547454  603921 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 11:52:31.547789  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:31.674247  603921 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 11:52:31.674310  603921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:31.674373  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:31.683142  603921 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.683265  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.693453  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:32.082334  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1213 11:52:32.082370  603921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:32.082422  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:32.082516  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:34.308510  603921 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.225962736s)
	I1213 11:52:34.308588  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:34.308603  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (2.22615852s)
	I1213 11:52:34.308620  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1213 11:52:34.308638  603921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:34.308676  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:35.518931  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.210235098s)
	I1213 11:52:35.518963  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1213 11:52:35.518985  603921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:35.519031  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:35.519092  603921 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.210494086s)
	I1213 11:52:35.519120  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 11:52:35.519184  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:37.072889  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.553831938s)
	I1213 11:52:37.072913  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1213 11:52:37.072933  603921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:52:37.072981  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:52:37.073078  603921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.553877116s)
	I1213 11:52:37.073095  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 11:52:37.073111  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1213 11:52:39.737650  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.664648689s)
	I1213 11:52:39.737681  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1213 11:52:39.737702  603921 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:39.737752  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:41.698779  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.960996821s)
	I1213 11:52:41.698804  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1213 11:52:41.698827  603921 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:52:41.698877  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:52:43.447904  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.748996566s)
	I1213 11:52:43.447934  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 11:52:43.447952  603921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:43.448001  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:44.178615  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:52:44.178655  603921 cache_images.go:125] Successfully loaded all cached images
	I1213 11:52:44.178662  603921 cache_images.go:94] duration metric: took 13.878753268s to LoadCachedImages
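	For reference, the image-load sequence recorded above can be retraced by hand on the node (for example through 'minikube -p no-preload-307409 ssh'). This is a hedged sketch built only from commands that already appear in this log, shown for a single image; it is not an exact replay of the test:

	    # does the container runtime already have the image? (cache_images.go existence check)
	    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.35.0-beta.0
	    # has the cached tarball already been copied to the node?
	    stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	    # after the tarball is scp'd from the host cache, load it into CRI-O storage
	    sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0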
	I1213 11:52:44.178674  603921 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:44.178763  603921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
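	The kubelet unit text and flags above are written to the node a few seconds later (see the scp lines at 11:52:46). A hedged way to inspect the result on the node, with both paths taken from those scp lines:

	    systemctl cat kubelet                                       # base unit plus any drop-ins
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # drop-in carrying the ExecStart flags shown above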
	I1213 11:52:44.178851  603921 ssh_runner.go:195] Run: crio config
	I1213 11:52:44.242383  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.242401  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.242418  603921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:52:44.242441  603921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:44.242555  603921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:44.242622  603921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
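	The kubeadm config printed above is later written to /var/tmp/minikube/kubeadm.yaml.new and handed to kubeadm init. As a hedged aside that the test itself does not perform, the same file can be sanity-checked without changing node state by running the bundled kubeadm in dry-run mode:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run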
	I1213 11:52:44.254521  603921 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 11:52:44.254582  603921 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.274613  603921 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 11:52:44.274705  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 11:52:44.275568  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 11:52:44.278466  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 11:52:44.279131  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 11:52:44.279162  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 11:52:45.122331  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:45.166456  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 11:52:45.191725  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 11:52:45.191781  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 11:52:45.304315  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 11:52:45.334054  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 11:52:45.334112  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
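	Each binary above is fetched from dl.k8s.io together with the .sha256 checksum file referenced in the download URL. A hedged sketch of verifying one of them by hand, using the same URLs that appear in the log (the .sha256 file holds only the hex digest):

	    curl -fLO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet"
	    curl -fL -o kubelet.sha256 "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256"
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -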
	I1213 11:52:46.015388  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:46.024888  603921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:46.040762  603921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:46.056856  603921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 11:52:46.080441  603921 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:46.084885  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:46.097815  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:46.230479  603921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:46.251958  603921 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 11:52:46.251982  603921 certs.go:195] generating shared ca certs ...
	I1213 11:52:46.251998  603921 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.252212  603921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:46.252287  603921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:46.252302  603921 certs.go:257] generating profile certs ...
	I1213 11:52:46.252373  603921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 11:52:46.252392  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt with IP's: []
	I1213 11:52:46.687159  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt ...
	I1213 11:52:46.687196  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt: {Name:mkd3b6de93eb4d0d7c38606e110ec8041a7a8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687382  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key ...
	I1213 11:52:46.687530  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key: {Name:mk69f4e38edb3a6758b30b8919bec09ed6524780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687680  603921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 11:52:46.687705  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:52:47.101196  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b ...
	I1213 11:52:47.101275  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b: {Name:mkf348306e6448fd779f0c40568bfbc2591db27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101515  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b ...
	I1213 11:52:47.101554  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b: {Name:mk67006fcc87c7852dc9dd2baf2e5c091f89fb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101697  603921 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt
	I1213 11:52:47.101816  603921 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key
	I1213 11:52:47.101906  603921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 11:52:47.101964  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt with IP's: []
	I1213 11:52:47.391626  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt ...
	I1213 11:52:47.391702  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt: {Name:mk6bf9ff3c46be8a69edc887a1d740e84c930536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.391910  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key ...
	I1213 11:52:47.391946  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key: {Name:mk5282a1a4966c51394d6aeb663ae12cef8b3a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.392186  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:47.392256  603921 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:47.392281  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:47.392345  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:47.392401  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:47.392449  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:47.392534  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:47.393177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:47.413169  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:47.433634  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:47.456446  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:47.475453  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:47.495921  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:52:47.516359  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:47.533557  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:52:47.553686  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:47.576528  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:47.595023  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:47.617574  603921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:47.632766  603921 ssh_runner.go:195] Run: openssl version
	I1213 11:52:47.642255  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.651062  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:47.660280  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665117  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665212  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.711366  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:52:47.719094  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:52:47.727218  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.735147  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:52:47.743430  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748386  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748477  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.811036  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:52:47.824172  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:52:47.833720  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.842937  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:47.852257  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857336  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857459  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.913987  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.923742  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
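	The openssl/ln pairs above follow the standard OpenSSL subject-hash convention: each CA file is symlinked under the hash of its subject so the TLS stack can locate it in /etc/ssl/certs. A hedged one-liner reproducing, for example, the b5213941.0 link created above:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"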
	I1213 11:52:47.932105  603921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:52:47.937831  603921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:52:47.937953  603921 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:47.938056  603921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:52:47.938131  603921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:52:47.977617  603921 cri.go:89] found id: ""
	I1213 11:52:47.977734  603921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:52:47.986677  603921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:52:47.995428  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:52:47.995568  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:52:48.012929  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:52:48.013001  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:52:48.013078  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:52:48.023587  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:52:48.023720  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:52:48.033048  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:52:48.042898  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:52:48.043030  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:52:48.052336  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.062442  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:52:48.062560  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.071404  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:52:48.081302  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:52:48.081415  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:52:48.090412  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:52:48.139895  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:52:48.140310  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:52:48.244346  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:52:48.244445  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:52:48.244514  603921 kubeadm.go:319] OS: Linux
	I1213 11:52:48.244581  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:52:48.244649  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:52:48.244717  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:52:48.244785  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:52:48.244849  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:52:48.244917  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:52:48.244983  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:52:48.245052  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:52:48.245113  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:52:48.326956  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:52:48.327125  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:52:48.327254  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:52:48.353781  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:52:48.362615  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:52:48.362749  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:52:48.362861  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:52:48.406340  603921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:52:48.617898  603921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:52:48.894950  603921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:52:49.002897  603921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:52:49.595632  603921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:52:49.596022  603921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.703067  603921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:52:49.703500  603921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.852748  603921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:52:49.985441  603921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:52:50.361702  603921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:52:50.362007  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:52:50.448441  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:52:50.524868  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:52:51.254957  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:52:51.473347  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:52:51.686418  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:52:51.686517  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:52:51.690277  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:52:51.694117  603921 out.go:252]   - Booting up control plane ...
	I1213 11:52:51.694231  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:52:51.694310  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:52:51.695018  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:52:51.714016  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:52:51.714689  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:52:51.728439  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:52:51.728548  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:52:51.728589  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:52:51.918802  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:52:51.918928  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:56:51.920072  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001224221s
	I1213 11:56:51.920104  603921 kubeadm.go:319] 
	I1213 11:56:51.920212  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:56:51.920270  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:56:51.920608  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:56:51.920619  603921 kubeadm.go:319] 
	I1213 11:56:51.920812  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:56:51.920869  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:56:51.921157  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:56:51.921165  603921 kubeadm.go:319] 
	I1213 11:56:51.925513  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:56:51.926006  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:56:51.926180  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:56:51.926479  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:56:51.926517  603921 kubeadm.go:319] 
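	At this point kubeadm init has given up waiting for the kubelet health endpoint and minikube falls back to a retry (see the warning that follows). A hedged triage sketch on the node, for example through 'minikube -p no-preload-307409 ssh', combining the commands kubeadm itself suggests with the probe it was polling:

	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 100
	    curl -sS http://127.0.0.1:10248/healthz   # the endpoint kubeadm polled for 4m0s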
	W1213 11:56:51.926771  603921 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001224221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001224221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 11:56:51.926983  603921 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:56:51.927241  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:56:52.337349  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:56:52.355756  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:56:52.355865  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:56:52.364798  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:56:52.364819  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:56:52.364872  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:56:52.373016  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:56:52.373085  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:56:52.380868  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:56:52.388839  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:56:52.388908  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:56:52.396493  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.404428  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:56:52.404492  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.412543  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:56:52.420710  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:56:52.420784  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:56:52.428931  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:56:52.469486  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:56:52.469812  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:56:52.544538  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:56:52.544634  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:56:52.544691  603921 kubeadm.go:319] OS: Linux
	I1213 11:56:52.544758  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:56:52.544826  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:56:52.544893  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:56:52.544959  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:56:52.545027  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:56:52.545094  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:56:52.545159  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:56:52.545225  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:56:52.545290  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:56:52.613010  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:56:52.613120  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:56:52.613213  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:56:52.631911  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:56:52.635687  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:56:52.635862  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:56:52.635952  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:56:52.636046  603921 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:56:52.636157  603921 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:56:52.636251  603921 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:56:52.636343  603921 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:56:52.636411  603921 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:56:52.636489  603921 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:56:52.636569  603921 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:56:52.636650  603921 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:56:52.636696  603921 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:56:52.636757  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:56:52.776698  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:56:52.958761  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:56:53.117866  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:56:53.292950  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:56:53.736752  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:56:53.737374  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:56:53.739900  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:56:53.743260  603921 out.go:252]   - Booting up control plane ...
	I1213 11:56:53.743409  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:56:53.743561  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:56:53.743673  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:56:53.757211  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:56:53.757338  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:56:53.765875  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:56:53.766984  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:56:53.767070  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:56:53.918187  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:56:53.918313  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:00:53.918383  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00010332s
	I1213 12:00:53.918411  603921 kubeadm.go:319] 
	I1213 12:00:53.918468  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:00:53.918502  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:00:53.918607  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:00:53.918611  603921 kubeadm.go:319] 
	I1213 12:00:53.918715  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:00:53.918747  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:00:53.918778  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:00:53.918782  603921 kubeadm.go:319] 
	I1213 12:00:53.924880  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:00:53.925344  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:00:53.925460  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:00:53.925729  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:00:53.925740  603921 kubeadm.go:319] 
	I1213 12:00:53.925866  603921 kubeadm.go:403] duration metric: took 8m5.987919453s to StartCluster
	I1213 12:00:53.925907  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:00:53.925972  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:00:53.926107  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:00:53.953173  603921 cri.go:89] found id: ""
	I1213 12:00:53.953257  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.953275  603921 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:00:53.953283  603921 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:00:53.953363  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:00:53.984628  603921 cri.go:89] found id: ""
	I1213 12:00:53.984655  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.984665  603921 logs.go:284] No container was found matching "etcd"
	I1213 12:00:53.984671  603921 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:00:53.984731  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:00:54.014942  603921 cri.go:89] found id: ""
	I1213 12:00:54.014969  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.014978  603921 logs.go:284] No container was found matching "coredns"
	I1213 12:00:54.014986  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:00:54.015045  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:00:54.064854  603921 cri.go:89] found id: ""
	I1213 12:00:54.064881  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.064890  603921 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:00:54.064897  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:00:54.064981  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:00:54.132162  603921 cri.go:89] found id: ""
	I1213 12:00:54.132187  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.132195  603921 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:00:54.132201  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:00:54.132311  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:00:54.159680  603921 cri.go:89] found id: ""
	I1213 12:00:54.159703  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.159712  603921 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:00:54.159718  603921 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:00:54.159779  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:00:54.185867  603921 cri.go:89] found id: ""
	I1213 12:00:54.185893  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.185902  603921 logs.go:284] No container was found matching "kindnet"
	I1213 12:00:54.185912  603921 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:00:54.185923  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:00:54.228270  603921 logs.go:123] Gathering logs for container status ...
	I1213 12:00:54.228303  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:00:54.257730  603921 logs.go:123] Gathering logs for kubelet ...
	I1213 12:00:54.257759  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:00:54.324854  603921 logs.go:123] Gathering logs for dmesg ...
	I1213 12:00:54.324892  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:00:54.342225  603921 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:00:54.342252  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:00:54.409722  603921 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:00:54.409752  603921 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:00:54.409821  603921 out.go:285] * 
	* 
	W1213 12:00:54.410005  603921 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.410026  603921 out.go:285] * 
	* 
	W1213 12:00:54.412399  603921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:00:54.417573  603921 out.go:203] 
	W1213 12:00:54.420481  603921 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.420529  603921 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:00:54.420553  603921 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:00:54.423665  603921 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
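The kubeadm output captured above repeatedly flags two things: the kubelet never became healthy on 127.0.0.1:10248, and the [WARNING SystemVerification] notes that cgroup v1 support now requires the KubeletConfiguration option 'FailCgroupV1' to be set to 'false' for kubelet v1.35+. The final suggestion in the log points at the kubelet cgroup driver. Below is a minimal sketch of a manual retry, assuming the same profile name, binary path, and arguments as the failed invocation recorded in this report, with only the flag taken verbatim from the log's own suggestion added; whether it actually resolves the failure on this cgroup v1 host is not confirmed by this run.

	# Hypothetical retry: same command as the failed FirstStart above, plus the
	# cgroup-driver override suggested in the captured stderr.
	out/minikube-linux-arm64 start -p no-preload-307409 \
	  --memory=3072 --alsologtostderr --wait=true --preload=false \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# Note: the kubeadm warning additionally says that running kubelet v1.35+ on a
	# cgroup v1 node requires the KubeletConfiguration option 'FailCgroupV1' set to
	# 'false' (option name quoted from the warning; not applied in this sketch).
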
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307409
helpers_test.go:244: (dbg) docker inspect no-preload-307409:

-- stdout --
	[
	    {
	        "Id": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	        "Created": "2025-12-13T11:52:23.357834479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:52:23.426122666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hosts",
	        "LogPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a-json.log",
	        "Name": "/no-preload-307409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-307409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	                "LowerDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-307409",
	                "Source": "/var/lib/docker/volumes/no-preload-307409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307409",
	                "name.minikube.sigs.k8s.io": "no-preload-307409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bbb75ba869ad4e24d065678acb24f13b332d42f86102a96ce228c9f56900de1",
	            "SandboxKey": "/var/run/docker/netns/3bbb75ba869a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-307409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:08:52:80:ec:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "280e424abad6162e6fbaaf316b3c6095ab0d80a59a1f82eb556a84b2dd4f139a",
	                    "EndpointID": "fa43d8567fac17df2e79f566f84f62b5ae267b3a77d79f87cf8d10e233d98a54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307409",
	                        "9fe6186bf0c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
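The inspect dump above is where the post-mortem reads the container's published ports; individual fields can be pulled with a Go template instead of parsing the full JSON, for example (container name taken from this run):

  # print the host port Docker assigned to the API server port (8443/tcp) of the node container
  docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-307409
  # for the state captured above this prints 33461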
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 6 (389.635472ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 12:00:54.904067  617038 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
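The stdout warning above says kubectl is pointing at a stale context while the stderr shows the profile missing from the kubeconfig. A sketch of checking and repairing that by hand (hypothetical follow-up, not run by the harness):

  kubectl config get-contexts                    # confirm whether no-preload-307409 is listed
  minikube -p no-preload-307409 update-context   # rewrite the kubeconfig entry for this profile
  kubectl config use-context no-preload-307409   # switch to it once it exists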
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-307409 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                            │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                            │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:52:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:52:44.222945  607523 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:44.223057  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223099  607523 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:44.223106  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223364  607523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:44.223812  607523 out.go:368] Setting JSON to false
	I1213 11:52:44.224724  607523 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12917,"bootTime":1765613848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:44.224797  607523 start.go:143] virtualization:  
	I1213 11:52:44.228935  607523 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:44.232087  607523 notify.go:221] Checking for updates...
	I1213 11:52:44.232862  607523 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:44.236046  607523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:44.241086  607523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:44.244482  607523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:44.247343  607523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:44.250267  607523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:44.253709  607523 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:44.253853  607523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:44.284666  607523 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:44.284774  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.401910  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.38729859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.402031  607523 docker.go:319] overlay module found
	I1213 11:52:44.405585  607523 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:44.408428  607523 start.go:309] selected driver: docker
	I1213 11:52:44.408454  607523 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:44.408468  607523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:44.409713  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.548406  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.53777287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.548555  607523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:52:44.548581  607523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:52:44.549476  607523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:52:44.552258  607523 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:44.555279  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.555356  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.555365  607523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:44.555448  607523 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:44.558889  607523 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 11:52:44.561893  607523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:44.564946  607523 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:44.567939  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:44.568029  607523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:52:44.568050  607523 cache.go:65] Caching tarball of preloaded images
	I1213 11:52:44.568145  607523 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:52:44.568156  607523 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
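The preload check above resolves against the run's local cache; the equivalent manual check, using the tarball path printed a few lines earlier, would simply be:

  ls -lh /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4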
	I1213 11:52:44.568295  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:44.568315  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json: {Name:mkca051d0f4222f12ada2e542e9765aa1caaa1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:44.568460  607523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:44.614235  607523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:44.614511  607523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:44.614568  607523 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:44.614617  607523 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:44.614732  607523 start.go:364] duration metric: took 92.595µs to acquireMachinesLock for "newest-cni-800979"
	I1213 11:52:44.614763  607523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:44.614850  607523 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:43.447904  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.748996566s)
	I1213 11:52:43.447934  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 11:52:43.447952  603921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:43.448001  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:44.178615  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:52:44.178655  603921 cache_images.go:125] Successfully loaded all cached images
	I1213 11:52:44.178662  603921 cache_images.go:94] duration metric: took 13.878753268s to LoadCachedImages
	I1213 11:52:44.178674  603921 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:44.178763  603921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
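The drop-in above is what minikube writes to the node before invoking kubeadm; assuming the profile from this run, it can be read back from the running container to compare against the flags logged here:

  minikube ssh -p no-preload-307409 -- systemctl cat kubelet
  minikube ssh -p no-preload-307409 -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf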
	I1213 11:52:44.178851  603921 ssh_runner.go:195] Run: crio config
	I1213 11:52:44.242383  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.242401  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.242418  603921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:52:44.242441  603921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:44.242555  603921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
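The generated kubeadm/kubelet/kube-proxy configuration above is copied onto the node later in this log; assuming the paths shown there, it can be inspected in place with:

  minikube ssh -p no-preload-307409 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
  minikube ssh -p no-preload-307409 -- sudo cat /var/lib/kubelet/config.yaml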
	
	I1213 11:52:44.242622  603921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.254521  603921 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 11:52:44.254582  603921 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.274613  603921 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 11:52:44.274705  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 11:52:44.275568  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 11:52:44.278466  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 11:52:44.279131  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 11:52:44.279162  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 11:52:45.122331  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:45.166456  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 11:52:45.191725  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 11:52:45.191781  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 11:52:45.304315  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 11:52:45.334054  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 11:52:45.334112  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
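With --preload=false the kubectl, kubelet and kubeadm binaries are fetched from dl.k8s.io with a published sha256, as the URLs above show. A hand-rolled equivalent for one of them (URL taken from this log; the .sha256 file carries just the digest):

  curl -fsSLO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
  curl -fsSL https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -o kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check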
	I1213 11:52:46.015388  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:46.024888  603921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:46.040762  603921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:46.056856  603921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 11:52:46.080441  603921 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:46.084885  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:46.097815  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:46.230479  603921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:46.251958  603921 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 11:52:46.251982  603921 certs.go:195] generating shared ca certs ...
	I1213 11:52:46.251998  603921 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.252212  603921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:46.252287  603921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:46.252302  603921 certs.go:257] generating profile certs ...
	I1213 11:52:46.252373  603921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 11:52:46.252392  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt with IP's: []
	I1213 11:52:46.687159  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt ...
	I1213 11:52:46.687196  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt: {Name:mkd3b6de93eb4d0d7c38606e110ec8041a7a8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687382  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key ...
	I1213 11:52:46.687530  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key: {Name:mk69f4e38edb3a6758b30b8919bec09ed6524780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687680  603921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 11:52:46.687705  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:52:47.101196  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b ...
	I1213 11:52:47.101275  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b: {Name:mkf348306e6448fd779f0c40568bfbc2591db27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101515  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b ...
	I1213 11:52:47.101554  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b: {Name:mk67006fcc87c7852dc9dd2baf2e5c091f89fb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101697  603921 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt
	I1213 11:52:47.101816  603921 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key
	I1213 11:52:47.101906  603921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 11:52:47.101964  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt with IP's: []
	I1213 11:52:47.391626  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt ...
	I1213 11:52:47.391702  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt: {Name:mk6bf9ff3c46be8a69edc887a1d740e84c930536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.391910  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key ...
	I1213 11:52:47.391946  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key: {Name:mk5282a1a4966c51394d6aeb663ae12cef8b3a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.392186  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:47.392256  603921 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:47.392281  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:47.392345  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:47.392401  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:47.392449  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:47.392534  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:47.393177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:47.413169  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:47.433634  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:47.456446  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:47.475453  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:47.495921  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:52:47.516359  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:47.533557  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:52:47.553686  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:47.576528  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:47.595023  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:47.617574  603921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:47.632766  603921 ssh_runner.go:195] Run: openssl version
	I1213 11:52:47.642255  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.651062  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:47.660280  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665117  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665212  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.711366  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:52:47.719094  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:52:47.727218  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.735147  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:52:47.743430  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748386  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748477  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.811036  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:52:47.824172  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:52:47.833720  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.842937  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:47.852257  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857336  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857459  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.913987  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.923742  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
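The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL hashed-directory convention: each CA under /etc/ssl/certs is reachable through the subject-name hash printed by openssl x509 -hash, with a .0 suffix. A minimal sketch of the same sequence for the cluster CA, reusing the hash that appears in the log above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0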
	I1213 11:52:47.932105  603921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:52:47.937831  603921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:52:47.937953  603921 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:47.938056  603921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:52:47.938131  603921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:52:47.977617  603921 cri.go:89] found id: ""
	I1213 11:52:47.977734  603921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:52:47.986677  603921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:52:47.995428  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:52:47.995568  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:52:48.012929  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:52:48.013001  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:52:48.013078  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:52:48.023587  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:52:48.023720  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:52:48.033048  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:52:48.042898  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:52:48.043030  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:52:48.052336  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.062442  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:52:48.062560  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.071404  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:52:48.081302  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:52:48.081415  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
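The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; here none of the files exist yet, so every grep exits with status 2 and the (no-op) rm runs anyway. A condensed sketch of the per-file check, written as a standalone shell fragment rather than the exact code minikube runs:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done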
	I1213 11:52:48.090412  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:52:48.139895  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:52:48.140310  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:52:48.244346  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:52:48.244445  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:52:48.244514  603921 kubeadm.go:319] OS: Linux
	I1213 11:52:48.244581  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:52:48.244649  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:52:48.244717  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:52:48.244785  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:52:48.244849  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:52:48.244917  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:52:48.244983  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:52:48.245052  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:52:48.245113  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:52:48.326956  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:52:48.327125  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:52:48.327254  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:52:48.353781  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
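The kubeadm init call at 11:52:48.090412 disables a long list of preflight checks (Swap, NumCPU, Mem, SystemVerification and several DirAvailable/FileAvailable checks) because minikube provisions those resources itself inside the kicbase container; the system-verification output above is therefore informational only. If needed, the preflight phase can be replayed on its own against the same config; a rough sketch, assuming the binary and config paths from this run:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification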
	I1213 11:52:44.618660  607523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:44.618986  607523 start.go:159] libmachine.API.Create for "newest-cni-800979" (driver="docker")
	I1213 11:52:44.619024  607523 client.go:173] LocalClient.Create starting
	I1213 11:52:44.619095  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:44.619134  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619169  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619234  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:44.619259  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619275  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619828  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:44.681886  607523 cli_runner.go:211] docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:44.682019  607523 network_create.go:284] running [docker network inspect newest-cni-800979] to gather additional debugging logs...
	I1213 11:52:44.682044  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979
	W1213 11:52:44.783263  607523 cli_runner.go:211] docker network inspect newest-cni-800979 returned with exit code 1
	I1213 11:52:44.783303  607523 network_create.go:287] error running [docker network inspect newest-cni-800979]: docker network inspect newest-cni-800979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800979 not found
	I1213 11:52:44.783456  607523 network_create.go:289] output of [docker network inspect newest-cni-800979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800979 not found
	
	** /stderr **
	I1213 11:52:44.783853  607523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:44.869365  607523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:44.869936  607523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:44.870324  607523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:44.872231  607523 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 11:52:44.872625  607523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-280e424abad6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5e:ad:5b:52:ee:cb} reservation:<nil>}
	I1213 11:52:44.873100  607523 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a730}
	I1213 11:52:44.873121  607523 network_create.go:124] attempt to create docker network newest-cni-800979 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 11:52:44.873186  607523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800979 newest-cni-800979
	I1213 11:52:45.033952  607523 network_create.go:108] docker network newest-cni-800979 192.168.94.0/24 created
	I1213 11:52:45.033989  607523 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-800979" container
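The subnet scan above walks the private 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, 76, 85, ...), skipping any range that an existing bridge occupies or that is reserved, and settles on 192.168.94.0/24 with gateway .1 and node IP .2. The occupied subnets can be listed directly from Docker; an illustrative one-liner, not part of this log:

	docker network ls -q | xargs docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'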
	I1213 11:52:45.034089  607523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:45.110922  607523 cli_runner.go:164] Run: docker volume create newest-cni-800979 --label name.minikube.sigs.k8s.io=newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:45.147181  607523 oci.go:103] Successfully created a docker volume newest-cni-800979
	I1213 11:52:45.148756  607523 cli_runner.go:164] Run: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:46.576150  607523 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.427287827s)
	I1213 11:52:46.576182  607523 oci.go:107] Successfully prepared a docker volume newest-cni-800979
	I1213 11:52:46.576222  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:46.576231  607523 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:52:46.576286  607523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
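The two docker run commands above implement the preload path: a short-lived kicbase container first checks the newest-cni-800979 named volume (--entrypoint /usr/bin/test ... -d /var/lib), then a second short-lived container bind-mounts the lz4 preload tarball read-only and untars it into that volume, which the node container later mounts at /var. A generic sketch of the extraction step with placeholder names:

	docker run --rm \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v <volume-name>:/extractDir \
	  --entrypoint /usr/bin/tar <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir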
	I1213 11:52:48.362615  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:52:48.362749  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:52:48.362861  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:52:48.406340  603921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:52:48.617898  603921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:52:48.894950  603921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:52:49.002897  603921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:52:49.595632  603921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:52:49.596022  603921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.703067  603921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:52:49.703500  603921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.852748  603921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:52:49.985441  603921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:52:50.361702  603921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:52:50.362007  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:52:50.448441  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:52:50.524868  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:52:51.254957  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:52:51.473347  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:52:51.686418  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:52:51.686517  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:52:51.690277  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:52:51.694117  603921 out.go:252]   - Booting up control plane ...
	I1213 11:52:51.694231  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:52:51.694310  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:52:51.695018  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:52:51.714016  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:52:51.714689  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:52:51.728439  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:52:51.728548  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:52:51.728589  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:52:51.918802  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:52:51.918928  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:52:51.477960  607523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.901639858s)
	I1213 11:52:51.478004  607523 kic.go:203] duration metric: took 4.901755297s to extract preloaded images to volume ...
	W1213 11:52:51.478154  607523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:51.478257  607523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:51.600099  607523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800979 --name newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800979 --network newest-cni-800979 --ip 192.168.94.2 --volume newest-cni-800979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:52.003446  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Running}}
	I1213 11:52:52.025630  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.044945  607523 cli_runner.go:164] Run: docker exec newest-cni-800979 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:52.103780  607523 oci.go:144] the created container "newest-cni-800979" has a running status.
	I1213 11:52:52.103827  607523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa...
	I1213 11:52:52.454986  607523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:52.499855  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.520167  607523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:52.520186  607523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-800979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:52.595209  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.616614  607523 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:52.616710  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:52.645695  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:52.646054  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:52.646065  607523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:52.646853  607523 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49104->127.0.0.1:33463: read: connection reset by peer
	I1213 11:52:55.795509  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.795546  607523 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 11:52:55.795609  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:55.823768  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:55.824086  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:55.824105  607523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 11:52:55.984531  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.984627  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.004427  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.004789  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.004806  607523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:56.155779  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:52:56.155809  607523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:56.155840  607523 ubuntu.go:190] setting up certificates
	I1213 11:52:56.155849  607523 provision.go:84] configureAuth start
	I1213 11:52:56.155916  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:56.173051  607523 provision.go:143] copyHostCerts
	I1213 11:52:56.173126  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:56.173140  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:56.173218  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:56.173314  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:56.173326  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:56.173354  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:56.173407  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:56.173416  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:56.173440  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:56.173493  607523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 11:52:56.495741  607523 provision.go:177] copyRemoteCerts
	I1213 11:52:56.495819  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:56.495860  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.513776  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:56.623272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:56.640893  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:56.658251  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:56.675898  607523 provision.go:87] duration metric: took 520.035144ms to configureAuth
	I1213 11:52:56.675924  607523 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:56.676119  607523 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:56.676229  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.693573  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.693885  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.693913  607523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:57.000433  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:57.000459  607523 machine.go:97] duration metric: took 4.383824523s to provisionDockerMachine
	I1213 11:52:57.000471  607523 client.go:176] duration metric: took 12.381437402s to LocalClient.Create
	I1213 11:52:57.000485  607523 start.go:167] duration metric: took 12.381502329s to libmachine.API.Create "newest-cni-800979"
	I1213 11:52:57.000493  607523 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 11:52:57.000506  607523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:57.000573  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:57.000635  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.019654  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.123498  607523 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:57.126887  607523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:57.126915  607523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:57.126942  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:57.127003  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:57.127090  607523 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:57.127193  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:57.134628  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:57.153601  607523 start.go:296] duration metric: took 153.093637ms for postStartSetup
	I1213 11:52:57.154022  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.174170  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:57.174465  607523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:57.174516  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.191003  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.300652  607523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:57.305941  607523 start.go:128] duration metric: took 12.691075107s to createHost
	I1213 11:52:57.305969  607523 start.go:83] releasing machines lock for "newest-cni-800979", held for 12.691222882s
	I1213 11:52:57.306067  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.324383  607523 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:57.324411  607523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:57.324436  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.324473  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.349379  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.349454  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.540188  607523 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:57.546743  607523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:57.581981  607523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:57.586210  607523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:57.586277  607523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:57.614440  607523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:52:57.614460  607523 start.go:496] detecting cgroup driver to use...
	I1213 11:52:57.614492  607523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:57.614539  607523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:57.632118  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:57.645277  607523 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:57.645361  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:57.663447  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:57.682384  607523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:57.805277  607523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:57.932514  607523 docker.go:234] disabling docker service ...
	I1213 11:52:57.932589  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:57.955202  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:57.968354  607523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:58.113128  607523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:58.247772  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:58.262298  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:58.277400  607523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:58.277526  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.287200  607523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:58.287335  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.296697  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.305672  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.315083  607523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:58.324248  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.333206  607523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.346564  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.355703  607523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:58.363253  607523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:58.370805  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:58.492125  607523 ssh_runner.go:195] Run: sudo systemctl restart crio
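The sed edits at 11:52:58 rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to cgroupfs, conmon is placed in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected as a default sysctl. After the restart the drop-in should contain roughly the following; this is an illustrative check, not output captured in this run:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",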
	I1213 11:52:58.663207  607523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:58.663336  607523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:58.667219  607523 start.go:564] Will wait 60s for crictl version
	I1213 11:52:58.667334  607523 ssh_runner.go:195] Run: which crictl
	I1213 11:52:58.671116  607523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:58.697501  607523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:52:58.697619  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.733197  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.768647  607523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:58.771459  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:58.789274  607523 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:58.795116  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:58.812164  607523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:52:58.814926  607523 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:58.815100  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:58.815179  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.855416  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.855438  607523 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:52:58.855493  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.882823  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.882846  607523 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:52:58.882855  607523 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:58.882940  607523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:58.883028  607523 ssh_runner.go:195] Run: crio config
	I1213 11:52:58.937332  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:58.937355  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:58.937377  607523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:52:58.937402  607523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:58.937530  607523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:58.937607  607523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:58.945256  607523 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:52:58.945332  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:58.952916  607523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:58.965421  607523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:58.978594  607523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
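The kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what lands in /var/tmp/minikube/kubeadm.yaml.new (2219 bytes in this run) before being copied over kubeadm.yaml. On recent kubeadm releases a file like this can be checked offline; a sketch assuming kubeadm is on the PATH:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new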
	I1213 11:52:58.991343  607523 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:58.994981  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:59.006043  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:59.120731  607523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:59.136632  607523 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 11:52:59.136650  607523 certs.go:195] generating shared ca certs ...
	I1213 11:52:59.136667  607523 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.136813  607523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:59.136864  607523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:59.136875  607523 certs.go:257] generating profile certs ...
	I1213 11:52:59.136930  607523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 11:52:59.136948  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt with IP's: []
	I1213 11:52:59.229537  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt ...
	I1213 11:52:59.229569  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt: {Name:mk69c62c6a65f19f1e9ae6f6006b84310e5ca69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229797  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key ...
	I1213 11:52:59.229813  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key: {Name:mk0d678e2df0ba46ea7a7d9db0beddac15d16cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229927  607523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 11:52:59.229947  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 11:52:59.395722  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 ...
	I1213 11:52:59.395753  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606: {Name:mk2f0d7037f2191b2fb310c8e6e39abce6919307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.395933  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 ...
	I1213 11:52:59.395948  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606: {Name:mkeda4d05cf7f14a6919666348bb90fff24821e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.396035  607523 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt
	I1213 11:52:59.396122  607523 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key
	I1213 11:52:59.396187  607523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 11:52:59.396205  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt with IP's: []
	I1213 11:52:59.677399  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt ...
	I1213 11:52:59.677431  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt: {Name:mk4f6f44ef9664fbc510805af3a0a5d8216b34d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677617  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key ...
	I1213 11:52:59.677634  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key: {Name:mk08e1a717d212a6e36443fd4449253d4dfd4e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677867  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:59.677925  607523 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:59.677936  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:59.677963  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:59.677989  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:59.678018  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:59.678067  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:59.678646  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:59.697504  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:59.715937  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:59.733272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:59.751842  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:59.769868  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:52:59.787032  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:59.804197  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:52:59.822307  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:59.840119  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:59.857580  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:59.875033  607523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:59.887226  607523 ssh_runner.go:195] Run: openssl version
	I1213 11:52:59.893568  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.900683  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:59.907927  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911699  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911785  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.952546  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.959999  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.967191  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.974551  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:59.981936  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985667  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985735  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:53:00.029636  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:53:00.039949  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:53:00.051259  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.062203  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:53:00.071922  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077479  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077644  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.129667  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:53:00.145873  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:53:00.165719  607523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:53:00.182484  607523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:53:00.182650  607523 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:53:00.191964  607523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:53:00.192781  607523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:53:00.308764  607523 cri.go:89] found id: ""
	I1213 11:53:00.308851  607523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:53:00.339801  607523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:53:00.369102  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:53:00.369171  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:53:00.383298  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:53:00.383367  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:53:00.383424  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:53:00.395580  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:53:00.395656  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:53:00.405571  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:53:00.415778  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:53:00.415854  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:53:00.424800  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.434079  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:53:00.434162  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.443040  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:53:00.452144  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:53:00.452246  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:53:00.461542  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:53:00.503183  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:53:00.503307  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:53:00.580961  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:53:00.581064  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:53:00.581117  607523 kubeadm.go:319] OS: Linux
	I1213 11:53:00.581167  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:53:00.581226  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:53:00.581277  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:53:00.581327  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:53:00.581379  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:53:00.581429  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:53:00.581478  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:53:00.581529  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:53:00.581581  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:53:00.654422  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:53:00.654539  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:53:00.654635  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:53:00.667854  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:53:00.673949  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:53:00.674119  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:53:00.674229  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:53:00.749466  607523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:53:00.853085  607523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:53:01.087749  607523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:53:01.312048  607523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:53:01.513347  607523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:53:01.513768  607523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:01.838749  607523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:53:01.839657  607523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:02.478657  607523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:53:02.876105  607523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:53:03.010338  607523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:53:03.010418  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:53:03.200889  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:53:03.653890  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:53:04.344965  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:53:04.580887  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:53:04.785257  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:53:04.787179  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:53:04.796409  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:53:04.799699  607523 out.go:252]   - Booting up control plane ...
	I1213 11:53:04.799829  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:53:04.799918  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:53:04.803001  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:53:04.836757  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:53:04.837037  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:53:04.849469  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:53:04.850109  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:53:04.853862  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:53:05.015188  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:53:05.015326  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:56:51.920072  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001224221s
	I1213 11:56:51.920104  603921 kubeadm.go:319] 
	I1213 11:56:51.920212  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:56:51.920270  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:56:51.920608  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:56:51.920619  603921 kubeadm.go:319] 
	I1213 11:56:51.920812  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:56:51.920869  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:56:51.921157  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:56:51.921165  603921 kubeadm.go:319] 
	I1213 11:56:51.925513  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:56:51.926006  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:56:51.926180  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:56:51.926479  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:56:51.926517  603921 kubeadm.go:319] 
	W1213 11:56:51.926771  603921 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001224221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
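	The dump above (and the matching one for the other profile further below) records the same failure mode: kubeadm's wait-control-plane phase gives the kubelet 4m0s to answer its health endpoint and then aborts. A minimal troubleshooting sketch on the failing node, using only the commands the kubeadm output itself suggests; the 'minikube ssh' entry point and the profile name are assumptions about how one would reach this node, not part of the captured run:

	  minikube ssh -p no-preload-307409          # assumption: shell into the node for this profile
	  sudo systemctl status kubelet              # is the unit active, or crash-looping?
	  sudo journalctl -xeu kubelet -n 200        # recent kubelet log with explanatory context
	  curl -sSL http://127.0.0.1:10248/healthz   # the endpoint kubeadm polls for up to 4m0s
	  sudo systemctl enable kubelet.service      # addresses the [WARNING Service-kubelet] note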
	
	I1213 11:56:51.926983  603921 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:56:51.927241  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:56:52.337349  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:56:52.355756  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:56:52.355865  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:56:52.364798  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:56:52.364819  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:56:52.364872  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:56:52.373016  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:56:52.373085  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:56:52.380868  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:56:52.388839  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:56:52.388908  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:56:52.396493  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.404428  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:56:52.404492  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.412543  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:56:52.420710  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:56:52.420784  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:56:52.428931  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:56:52.469486  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:56:52.469812  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:56:52.544538  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:56:52.544634  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:56:52.544691  603921 kubeadm.go:319] OS: Linux
	I1213 11:56:52.544758  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:56:52.544826  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:56:52.544893  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:56:52.544959  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:56:52.545027  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:56:52.545094  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:56:52.545159  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:56:52.545225  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:56:52.545290  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:56:52.613010  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:56:52.613120  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:56:52.613213  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:56:52.631911  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:56:52.635687  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:56:52.635862  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:56:52.635952  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:56:52.636046  603921 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:56:52.636157  603921 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:56:52.636251  603921 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:56:52.636343  603921 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:56:52.636411  603921 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:56:52.636489  603921 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:56:52.636569  603921 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:56:52.636650  603921 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:56:52.636696  603921 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:56:52.636757  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:56:52.776698  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:56:52.958761  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:56:53.117866  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:56:53.292950  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:56:53.736752  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:56:53.737374  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:56:53.739900  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:56:53.743260  603921 out.go:252]   - Booting up control plane ...
	I1213 11:56:53.743409  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:56:53.743561  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:56:53.743673  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:56:53.757211  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:56:53.757338  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:56:53.765875  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:56:53.766984  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:56:53.767070  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:56:53.918187  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:56:53.918313  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:57:05.013826  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000267538s
	I1213 11:57:05.013870  607523 kubeadm.go:319] 
	I1213 11:57:05.013935  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:57:05.013971  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:57:05.014088  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:57:05.014096  607523 kubeadm.go:319] 
	I1213 11:57:05.014210  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:57:05.014246  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:57:05.014279  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:57:05.014287  607523 kubeadm.go:319] 
	I1213 11:57:05.020057  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:57:05.020490  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:57:05.020604  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:57:05.020844  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:57:05.020856  607523 kubeadm.go:319] 
	I1213 11:57:05.020925  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:57:05.021047  607523 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000267538s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
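	Both attempts also print the same cgroups v1 deprecation warning, which suggests this CI node is still on cgroup v1. A short sketch for checking the node's cgroup mode and for the kubelet option the warning names; only the option name comes from the warning text, while the lowerCamelCase spelling (failCgroupV1), the file path, and the idea of writing it as a standalone fragment are assumptions for illustration:

	  # cgroup2fs => cgroup v2, tmpfs => cgroup v1
	  stat -fc %T /sys/fs/cgroup/
	  # hypothetical KubeletConfiguration fragment keeping cgroup v1 tolerated, per the warning
	  cat <<'EOF' | sudo tee /tmp/kubelet-cgroupv1-patch.yaml
	  apiVersion: kubelet.config.k8s.io/v1beta1
	  kind: KubeletConfiguration
	  failCgroupV1: false
	  EOF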
	
	I1213 11:57:05.021134  607523 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:57:05.432952  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:57:05.445933  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:57:05.446023  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:57:05.454556  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:57:05.454578  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:57:05.454629  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:57:05.462597  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:57:05.462670  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:57:05.470456  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:57:05.478316  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:57:05.478382  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:57:05.485947  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.494252  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:57:05.494320  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.502133  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:57:05.510237  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:57:05.510311  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:57:05.518001  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:57:05.584840  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:57:05.585142  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:57:05.657959  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:57:05.658125  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:57:05.658198  607523 kubeadm.go:319] OS: Linux
	I1213 11:57:05.658288  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:57:05.658378  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:57:05.658471  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:57:05.658558  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:57:05.658635  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:57:05.658730  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:57:05.658813  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:57:05.658915  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:57:05.659000  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:57:05.731597  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:57:05.731775  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:57:05.731903  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:57:05.740855  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:57:05.744423  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:57:05.744578  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:57:05.744679  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:57:05.744796  607523 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:57:05.744887  607523 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:57:05.744992  607523 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:57:05.745076  607523 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:57:05.745170  607523 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:57:05.745499  607523 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:57:05.745582  607523 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:57:05.745655  607523 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:57:05.745694  607523 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:57:05.745749  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:57:05.913677  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:57:06.384962  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:57:07.036559  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:57:07.437110  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:57:07.602655  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:57:07.603483  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:57:07.607251  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:57:07.612344  607523 out.go:252]   - Booting up control plane ...
	I1213 11:57:07.612453  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:57:07.612542  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:57:07.612663  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:57:07.626734  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:57:07.627071  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:57:07.634285  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:57:07.634609  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:57:07.634655  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:57:07.773578  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:57:07.773700  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:00:53.918383  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00010332s
	I1213 12:00:53.918411  603921 kubeadm.go:319] 
	I1213 12:00:53.918468  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:00:53.918502  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:00:53.918607  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:00:53.918611  603921 kubeadm.go:319] 
	I1213 12:00:53.918715  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:00:53.918747  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:00:53.918778  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:00:53.918782  603921 kubeadm.go:319] 
	I1213 12:00:53.924880  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:00:53.925344  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:00:53.925460  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:00:53.925729  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:00:53.925740  603921 kubeadm.go:319] 
	I1213 12:00:53.925866  603921 kubeadm.go:403] duration metric: took 8m5.987919453s to StartCluster
	I1213 12:00:53.925907  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:00:53.925972  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:00:53.926107  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:00:53.953173  603921 cri.go:89] found id: ""
	I1213 12:00:53.953257  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.953275  603921 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:00:53.953283  603921 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:00:53.953363  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:00:53.984628  603921 cri.go:89] found id: ""
	I1213 12:00:53.984655  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.984665  603921 logs.go:284] No container was found matching "etcd"
	I1213 12:00:53.984671  603921 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:00:53.984731  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:00:54.014942  603921 cri.go:89] found id: ""
	I1213 12:00:54.014969  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.014978  603921 logs.go:284] No container was found matching "coredns"
	I1213 12:00:54.014986  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:00:54.015045  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:00:54.064854  603921 cri.go:89] found id: ""
	I1213 12:00:54.064881  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.064890  603921 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:00:54.064897  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:00:54.064981  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:00:54.132162  603921 cri.go:89] found id: ""
	I1213 12:00:54.132187  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.132195  603921 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:00:54.132201  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:00:54.132311  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:00:54.159680  603921 cri.go:89] found id: ""
	I1213 12:00:54.159703  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.159712  603921 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:00:54.159718  603921 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:00:54.159779  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:00:54.185867  603921 cri.go:89] found id: ""
	I1213 12:00:54.185893  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.185902  603921 logs.go:284] No container was found matching "kindnet"
	I1213 12:00:54.185912  603921 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:00:54.185923  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:00:54.228270  603921 logs.go:123] Gathering logs for container status ...
	I1213 12:00:54.228303  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:00:54.257730  603921 logs.go:123] Gathering logs for kubelet ...
	I1213 12:00:54.257759  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:00:54.324854  603921 logs.go:123] Gathering logs for dmesg ...
	I1213 12:00:54.324892  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:00:54.342225  603921 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:00:54.342252  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:00:54.409722  603921 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:00:54.409752  603921 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:00:54.409821  603921 out.go:285] * 
	W1213 12:00:54.410005  603921 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.410026  603921 out.go:285] * 
	W1213 12:00:54.412399  603921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:00:54.417573  603921 out.go:203] 
	W1213 12:00:54.420481  603921 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.420529  603921 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:00:54.420553  603921 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:00:54.423665  603921 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116588744Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116681922Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779768303Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779939299Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779997318Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117107034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.11758611Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117646903Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342232553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342586722Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342639301Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.33182054Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=43635d89-3bd4-44c2-825f-c8431c65dc6f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.335082522Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b9aa7c65-27ab-4115-8617-40478e0c4431 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.336915661Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ba139078-fdf0-4392-91a6-145cf5852d50 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.338604774Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=827565fb-635d-461a-bd67-b5ae5370ff66 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.339721074Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0529a105-853e-48a9-a6a2-0f2cc8e7d4de name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.344733068Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d292bb3c-e44b-4d74-9c47-e804425ec1f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.347983735Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5383fa2b-ffc4-4de0-8c1f-994389259392 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.616112342Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3027b62a-b474-4ce9-a79a-b73a049c156c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.61769885Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=085f7430-a688-461e-929e-a810830d4d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.619174448Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dcae434f-7a2a-45da-aecd-fe682d69c75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.620679297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=9c972166-33b8-4e43-8eb0-69fa78d92d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.621515325Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f406858a-9da8-4255-acef-b33ba48d16bf name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.622872825Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=378a3dd0-9334-4c41-946c-b18ffb0ce982 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.62375966Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=685e82ba-5807-4b97-bc6c-0036cf58fa30 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:55.654425    5721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:55.654840    5721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:55.656399    5721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:55.656878    5721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:55.657977    5721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:00:55 up  3:43,  0 user,  load average: 0.90, 1.00, 1.58
	Linux no-preload-307409 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:00:53 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:54 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 13 12:00:54 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:54 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:54 no-preload-307409 kubelet[5562]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:54 no-preload-307409 kubelet[5562]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:54 no-preload-307409 kubelet[5562]: E1213 12:00:54.115773    5562 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:54 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:54 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:54 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 13 12:00:54 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:54 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:54 no-preload-307409 kubelet[5624]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:54 no-preload-307409 kubelet[5624]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:54 no-preload-307409 kubelet[5624]: E1213 12:00:54.873849    5624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:54 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:54 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:55 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 13 12:00:55 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:55 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:55 no-preload-307409 kubelet[5712]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:55 no-preload-307409 kubelet[5712]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:55 no-preload-307409 kubelet[5712]: E1213 12:00:55.630680    5712 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:55 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:55 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 6 (341.595766ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:00:56.150768  617266 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (514.03s)
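
The kubelet journal captured above shows the underlying failure for this test: on this cgroup v1 host, kubelet v1.35.0-beta.0 exits during configuration validation ("kubelet is configured to not run on a host using cgroup v1"), so the static control-plane pods never start and kubeadm times out waiting on 127.0.0.1:10248/healthz. The sketch below is a hypothetical workaround along the lines the SystemVerification warning describes, setting FailCgroupV1 to false through a kubeadm patch against the kubeletconfiguration target (the run above already applies one such patch); the directory and file name are illustrative, and migrating the host to cgroup v2 remains the cleaner fix.

	# hypothetical-patches/kubeletconfiguration+strategic.yaml
	# Strategic-merge patch for kubeadm's "kubeletconfiguration" target.
	# YAML field name assumed from the 'FailCgroupV1' option named in the warning.
	failCgroupV1: false
	#
	# Applied with something like:
	#   kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches hypothetical-patches/

Note that the generic suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd) targets a cgroup-driver mismatch and likely would not clear this validation error.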

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-326948 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-326948 --alsologtostderr -v=1: exit status 80 (2.117593377s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-326948 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:52:32.932385  605888 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:32.932562  605888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:32.932587  605888 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:32.932605  605888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:32.932906  605888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:32.934465  605888 out.go:368] Setting JSON to false
	I1213 11:52:32.934530  605888 mustload.go:66] Loading cluster: embed-certs-326948
	I1213 11:52:32.935051  605888 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:52:32.935597  605888 cli_runner.go:164] Run: docker container inspect embed-certs-326948 --format={{.State.Status}}
	I1213 11:52:32.955643  605888 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:52:32.955959  605888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:33.056783  605888 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-13 11:52:33.040593636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:33.057496  605888 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-326948 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 11:52:33.062999  605888 out.go:179] * Pausing node embed-certs-326948 ... 
	I1213 11:52:33.066404  605888 host.go:66] Checking if "embed-certs-326948" exists ...
	I1213 11:52:33.066758  605888 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:33.066804  605888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-326948
	I1213 11:52:33.097335  605888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/embed-certs-326948/id_rsa Username:docker}
	I1213 11:52:33.213184  605888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:33.231493  605888 pause.go:52] kubelet running: true
	I1213 11:52:33.231594  605888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:52:33.536290  605888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:52:33.536380  605888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:52:33.640523  605888 cri.go:89] found id: "520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d"
	I1213 11:52:33.640588  605888 cri.go:89] found id: "04543c0f719e7d85c63aa76e5c99b4b6f1b6ec0e2da337c46f1d0d11c624f0ed"
	I1213 11:52:33.640616  605888 cri.go:89] found id: "4b31b7b14f7ea7dae0165197cb4dcc5a91e11968d8fa8b418ffd9a16792f2d11"
	I1213 11:52:33.640640  605888 cri.go:89] found id: "5d10a35acf07003859e6f4a92a7647db98e28eaad48faab459dd989da04b1638"
	I1213 11:52:33.640669  605888 cri.go:89] found id: "793a7623a27a1583339563d46f86b94988bcd8d01c9ee6c3fc5ac20c8cc17b18"
	I1213 11:52:33.640695  605888 cri.go:89] found id: "2f0d882fac60f1616055bed06c1f6058d2f4d9771c371fa9e130d01762278744"
	I1213 11:52:33.640715  605888 cri.go:89] found id: "6dd44e49c88192d0751bf92478d724a6b1aba48c24981c5597a801740be36751"
	I1213 11:52:33.640734  605888 cri.go:89] found id: "5fa45fd0696ef89615d1d81b1bf2769d38c87713975e43422c105cb0d61cfdaa"
	I1213 11:52:33.640753  605888 cri.go:89] found id: "cb833c8e8af6645f23e9e2891cd88798a8d4211065330a18962b7d19db79c7ba"
	I1213 11:52:33.640785  605888 cri.go:89] found id: "b935dfd5f4ab0963b9e8e5cdedc0587e560b4b7330d8a4fc562de7886295f8c9"
	I1213 11:52:33.640811  605888 cri.go:89] found id: "28fd92fb28295293673648bc7a3d13d3ec24a5f53c88319b4e3d85812be1d0da"
	I1213 11:52:33.640837  605888 cri.go:89] found id: ""
	I1213 11:52:33.640931  605888 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:52:33.656823  605888 retry.go:31] will retry after 179.841391ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:33Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:52:33.837311  605888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:33.853680  605888 pause.go:52] kubelet running: false
	I1213 11:52:33.853789  605888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:52:34.058433  605888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:52:34.058605  605888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:52:34.152342  605888 cri.go:89] found id: "520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d"
	I1213 11:52:34.152426  605888 cri.go:89] found id: "04543c0f719e7d85c63aa76e5c99b4b6f1b6ec0e2da337c46f1d0d11c624f0ed"
	I1213 11:52:34.152452  605888 cri.go:89] found id: "4b31b7b14f7ea7dae0165197cb4dcc5a91e11968d8fa8b418ffd9a16792f2d11"
	I1213 11:52:34.152470  605888 cri.go:89] found id: "5d10a35acf07003859e6f4a92a7647db98e28eaad48faab459dd989da04b1638"
	I1213 11:52:34.152505  605888 cri.go:89] found id: "793a7623a27a1583339563d46f86b94988bcd8d01c9ee6c3fc5ac20c8cc17b18"
	I1213 11:52:34.152530  605888 cri.go:89] found id: "2f0d882fac60f1616055bed06c1f6058d2f4d9771c371fa9e130d01762278744"
	I1213 11:52:34.152548  605888 cri.go:89] found id: "6dd44e49c88192d0751bf92478d724a6b1aba48c24981c5597a801740be36751"
	I1213 11:52:34.152576  605888 cri.go:89] found id: "5fa45fd0696ef89615d1d81b1bf2769d38c87713975e43422c105cb0d61cfdaa"
	I1213 11:52:34.152594  605888 cri.go:89] found id: "cb833c8e8af6645f23e9e2891cd88798a8d4211065330a18962b7d19db79c7ba"
	I1213 11:52:34.152634  605888 cri.go:89] found id: "b935dfd5f4ab0963b9e8e5cdedc0587e560b4b7330d8a4fc562de7886295f8c9"
	I1213 11:52:34.152651  605888 cri.go:89] found id: "28fd92fb28295293673648bc7a3d13d3ec24a5f53c88319b4e3d85812be1d0da"
	I1213 11:52:34.152668  605888 cri.go:89] found id: ""
	I1213 11:52:34.152767  605888 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:52:34.163978  605888 retry.go:31] will retry after 403.193713ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:34Z" level=error msg="open /run/runc: no such file or directory"
	I1213 11:52:34.567579  605888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:34.586746  605888 pause.go:52] kubelet running: false
	I1213 11:52:34.586858  605888 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 11:52:34.821787  605888 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 11:52:34.821919  605888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 11:52:34.941014  605888 cri.go:89] found id: "520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d"
	I1213 11:52:34.941086  605888 cri.go:89] found id: "04543c0f719e7d85c63aa76e5c99b4b6f1b6ec0e2da337c46f1d0d11c624f0ed"
	I1213 11:52:34.941107  605888 cri.go:89] found id: "4b31b7b14f7ea7dae0165197cb4dcc5a91e11968d8fa8b418ffd9a16792f2d11"
	I1213 11:52:34.941127  605888 cri.go:89] found id: "5d10a35acf07003859e6f4a92a7647db98e28eaad48faab459dd989da04b1638"
	I1213 11:52:34.941166  605888 cri.go:89] found id: "793a7623a27a1583339563d46f86b94988bcd8d01c9ee6c3fc5ac20c8cc17b18"
	I1213 11:52:34.941189  605888 cri.go:89] found id: "2f0d882fac60f1616055bed06c1f6058d2f4d9771c371fa9e130d01762278744"
	I1213 11:52:34.941209  605888 cri.go:89] found id: "6dd44e49c88192d0751bf92478d724a6b1aba48c24981c5597a801740be36751"
	I1213 11:52:34.941227  605888 cri.go:89] found id: "5fa45fd0696ef89615d1d81b1bf2769d38c87713975e43422c105cb0d61cfdaa"
	I1213 11:52:34.941258  605888 cri.go:89] found id: "cb833c8e8af6645f23e9e2891cd88798a8d4211065330a18962b7d19db79c7ba"
	I1213 11:52:34.941284  605888 cri.go:89] found id: "b935dfd5f4ab0963b9e8e5cdedc0587e560b4b7330d8a4fc562de7886295f8c9"
	I1213 11:52:34.941303  605888 cri.go:89] found id: "28fd92fb28295293673648bc7a3d13d3ec24a5f53c88319b4e3d85812be1d0da"
	I1213 11:52:34.941367  605888 cri.go:89] found id: ""
	I1213 11:52:34.941447  605888 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 11:52:34.960958  605888 out.go:203] 
	W1213 11:52:34.964023  605888 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T11:52:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 11:52:34.964046  605888 out.go:285] * 
	* 
	W1213 11:52:34.970278  605888 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 11:52:34.974767  605888 out.go:203] 
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-326948 --alsologtostderr -v=1 failed: exit status 80
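Note on the failure above: the pause exits with GUEST_PAUSE before anything is actually paused, because the node-side listing step `sudo runc list -f json` fails with "open /run/runc: no such file or directory", i.e. runc's default state directory is missing on the node. Below is a minimal, hypothetical Go sketch of that same listing step (the struct covers only a subset of runc's JSON fields; the sudo/PATH assumptions are for illustration only), which can be used to reproduce the error from inside the node, e.g. via `minikube ssh`:

// Hypothetical reproduction of the listing step that failed above:
// run `sudo runc list -f json` and decode the resulting JSON array.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors a subset of the fields runc prints per container.
type runcContainer struct {
	ID     string `json:"id"`
	Pid    int    `json:"pid"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On the failing node this is where "open /run/runc: no such
		// file or directory" surfaces as exit status 1.
		fmt.Println("runc list failed:", err)
		return
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, c := range containers {
		fmt.Printf("%s\t%d\t%s\n", c.ID, c.Pid, c.Status)
	}
}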
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-326948
helpers_test.go:244: (dbg) docker inspect embed-certs-326948:
-- stdout --
	[
	    {
	        "Id": "4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14",
	        "Created": "2025-12-13T11:50:16.044997755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 600208,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:51:33.3802045Z",
	            "FinishedAt": "2025-12-13T11:51:31.76267067Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/hosts",
	        "LogPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14-json.log",
	        "Name": "/embed-certs-326948",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-326948:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-326948",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14",
	                "LowerDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-326948",
	                "Source": "/var/lib/docker/volumes/embed-certs-326948/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-326948",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-326948",
	                "name.minikube.sigs.k8s.io": "embed-certs-326948",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db2bb4222837e1683856073810bb516689072ab5a31fe5f9a95d933ae7a31120",
	            "SandboxKey": "/var/run/docker/netns/db2bb4222837",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-326948": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:5b:b5:49:7e:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b063c432202ef9f217d4b391af56f96171f14adb917467f7393ca248725893a",
	                    "EndpointID": "9517e70091383b972d818308b553cb68a806bcd2ba74f75934c0ea74636529c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-326948",
	                        "4fffdfd58e00"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
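The inspect dump above is the complete record; the post-mortem only consumes a few of these fields (the container state, and the host port published for 22/tcp, the kind of lookup the logs below also perform with `docker container inspect -f` templates). A minimal sketch, assuming the same JSON is piped in on stdin, of extracting just those fields; the struct mirrors the field names shown above and implies nothing about the test helpers' own code:

// Reads `docker inspect <container>` JSON from stdin and prints the
// container state plus the host endpoint published for 22/tcp.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status string `json:"Status"`
		Paused bool   `json:"Paused"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Printf("%s state=%s paused=%v", e.Name, e.State.Status, e.State.Paused)
		if ssh := e.NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
			fmt.Printf(" ssh=%s:%s", ssh[0].HostIP, ssh[0].HostPort)
		}
		fmt.Println()
	}
}

Against the output above, `docker inspect embed-certs-326948 | go run .` should print something like `/embed-certs-326948 state=running paused=false ssh=127.0.0.1:33453`.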
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948: exit status 2 (441.856328ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
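The non-zero status exit above is tolerated on purpose: `--format={{.Host}}` still prints "Running", and the "(may be ok)" annotation reflects that the harness treats certain non-zero exit codes from `minikube status` as state information rather than hard failures. A small illustrative sketch of the same tolerant check, reusing the binary path and profile name from this run:

// Runs the status command shown above and reports both the template
// output and the exit code instead of failing on any non-zero exit.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-326948", "-n", "embed-certs-326948")
	out, err := cmd.Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode() // e.g. 2 in the run above
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("host=%q exit=%d (non-zero may still be ok)\n", strings.TrimSpace(string(out)), code)
}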
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-326948 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-326948 logs -n 25: (1.60030006s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                               │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                   │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:52:22
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:52:22.177878  603921 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:22.177999  603921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:22.178011  603921 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:22.178016  603921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:22.178255  603921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:22.178669  603921 out.go:368] Setting JSON to false
	I1213 11:52:22.179625  603921 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12895,"bootTime":1765613848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:22.179698  603921 start.go:143] virtualization:  
	I1213 11:52:22.183759  603921 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:22.187220  603921 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:22.187293  603921 notify.go:221] Checking for updates...
	I1213 11:52:22.194687  603921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:22.202302  603921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:22.205231  603921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:22.208078  603921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:22.210961  603921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:22.214458  603921 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:52:22.214574  603921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:22.242903  603921 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:22.243027  603921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:22.310771  603921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:52:22.30036342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:22.310879  603921 docker.go:319] overlay module found
	I1213 11:52:22.315971  603921 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:22.318784  603921 start.go:309] selected driver: docker
	I1213 11:52:22.318803  603921 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:22.318817  603921 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:22.319579  603921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:22.380053  603921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:52:22.371010804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:22.380204  603921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:52:22.380437  603921 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:52:22.383329  603921 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:22.386243  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:22.386313  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:22.386327  603921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:22.386408  603921 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:22.389646  603921 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 11:52:22.392478  603921 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:22.395420  603921 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:22.398416  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:22.398505  603921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:22.398545  603921 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 11:52:22.398575  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json: {Name:mkec3b7ed172f77da3b248fbbf20fa0dbee47daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:22.400508  603921 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.400655  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 11:52:22.400685  603921 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.853487ms
	I1213 11:52:22.400708  603921 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 11:52:22.400731  603921 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.401593  603921 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:22.402011  603921 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402185  603921 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:22.402473  603921 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402632  603921 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:22.402788  603921 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402897  603921 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:22.403057  603921 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403186  603921 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:22.403350  603921 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403414  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 11:52:22.403426  603921 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.443µs
	I1213 11:52:22.403434  603921 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 11:52:22.403457  603921 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403493  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 11:52:22.403502  603921 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 54.286µs
	I1213 11:52:22.403549  603921 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 11:52:22.405169  603921 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:22.405591  603921 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:22.406004  603921 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:22.406392  603921 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:22.406763  603921 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:22.423280  603921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:22.423306  603921 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:22.423321  603921 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:22.423351  603921 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.423450  603921 start.go:364] duration metric: took 84.382µs to acquireMachinesLock for "no-preload-307409"
	I1213 11:52:22.423480  603921 start.go:93] Provisioning new machine with config: &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:22.423661  603921 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:22.429079  603921 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:22.429336  603921 start.go:159] libmachine.API.Create for "no-preload-307409" (driver="docker")
	I1213 11:52:22.429376  603921 client.go:173] LocalClient.Create starting
	I1213 11:52:22.429452  603921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:22.429493  603921 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:22.429513  603921 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:22.429576  603921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:22.429646  603921 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:22.429666  603921 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:22.430121  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:22.448911  603921 cli_runner.go:211] docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:22.448997  603921 network_create.go:284] running [docker network inspect no-preload-307409] to gather additional debugging logs...
	I1213 11:52:22.449017  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409
	W1213 11:52:22.468248  603921 cli_runner.go:211] docker network inspect no-preload-307409 returned with exit code 1
	I1213 11:52:22.468284  603921 network_create.go:287] error running [docker network inspect no-preload-307409]: docker network inspect no-preload-307409: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-307409 not found
	I1213 11:52:22.468303  603921 network_create.go:289] output of [docker network inspect no-preload-307409]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-307409 not found
	
	** /stderr **
	I1213 11:52:22.468404  603921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:22.485064  603921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:22.485424  603921 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:22.485663  603921 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:22.485957  603921 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5b063c432202 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:6c:83:b3:7b:3a} reservation:<nil>}
	I1213 11:52:22.486426  603921 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bb83e0}
	I1213 11:52:22.486448  603921 network_create.go:124] attempt to create docker network no-preload-307409 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 11:52:22.486504  603921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-307409 no-preload-307409
	I1213 11:52:22.561619  603921 network_create.go:108] docker network no-preload-307409 192.168.85.0/24 created
	I1213 11:52:22.561649  603921 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-307409" container
	I1213 11:52:22.561735  603921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:22.577243  603921 cli_runner.go:164] Run: docker volume create no-preload-307409 --label name.minikube.sigs.k8s.io=no-preload-307409 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:22.597274  603921 oci.go:103] Successfully created a docker volume no-preload-307409
	I1213 11:52:22.597374  603921 cli_runner.go:164] Run: docker run --rm --name no-preload-307409-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307409 --entrypoint /usr/bin/test -v no-preload-307409:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:22.724954  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:22.752376  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:52:22.778070  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:22.797264  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:52:22.805390  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:23.209223  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 11:52:23.209301  603921 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 806.245475ms
	I1213 11:52:23.209330  603921 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 11:52:23.266936  603921 oci.go:107] Successfully prepared a docker volume no-preload-307409
	I1213 11:52:23.266994  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1213 11:52:23.267122  603921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:23.267237  603921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:23.342732  603921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-307409 --name no-preload-307409 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307409 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-307409 --network no-preload-307409 --ip 192.168.85.2 --volume no-preload-307409:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:23.695331  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 11:52:23.695405  603921 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.2929342s
	I1213 11:52:23.695435  603921 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 11:52:23.714188  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 11:52:23.714266  603921 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.312276464s
	I1213 11:52:23.714295  603921 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 11:52:23.746751  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Running}}
	I1213 11:52:23.749641  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 11:52:23.749678  603921 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.346893086s
	I1213 11:52:23.749691  603921 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 11:52:23.778616  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:23.802046  603921 cli_runner.go:164] Run: docker exec no-preload-307409 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:23.818032  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 11:52:23.818058  603921 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.417329777s
	I1213 11:52:23.818070  603921 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 11:52:23.818085  603921 cache.go:87] Successfully saved all images to host disk.
	I1213 11:52:23.869927  603921 oci.go:144] the created container "no-preload-307409" has a running status.
	I1213 11:52:23.869977  603921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa...
	I1213 11:52:23.990936  603921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:24.020412  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:24.046398  603921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:24.046421  603921 kic_runner.go:114] Args: [docker exec --privileged no-preload-307409 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:24.114724  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:24.145665  603921 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:24.145765  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:24.178680  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:24.179021  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:24.179031  603921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:24.179772  603921 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:52:27.331003  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 11:52:27.331028  603921 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 11:52:27.331091  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.350635  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.351104  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.351127  603921 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 11:52:27.517546  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 11:52:27.517640  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.537725  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.538047  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.538069  603921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:27.687673  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
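For context, the repeated "docker container inspect -f ..." calls in this step resolve the host port that Docker mapped to the container's SSH port (22/tcp), which the SSH client then dials on 127.0.0.1. A minimal Go sketch of that lookup is shown below; it is illustrative only (not minikube's cli_runner code), and the container name and port 33458 are taken from this log.

    // Sketch: resolve the host port mapped to a container's 22/tcp, as in the
    // "docker container inspect -f ..." calls logged above. Assumes the docker
    // CLI is on PATH; the container name comes from this run.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostSSHPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("no-preload-307409")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("ssh available on 127.0.0.1:" + port) // 33458 in the run above
    }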
	I1213 11:52:27.687762  603921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:27.687826  603921 ubuntu.go:190] setting up certificates
	I1213 11:52:27.687859  603921 provision.go:84] configureAuth start
	I1213 11:52:27.687988  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:27.704465  603921 provision.go:143] copyHostCerts
	I1213 11:52:27.704533  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:27.704542  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:27.704618  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:27.704711  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:27.704717  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:27.704742  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:27.704793  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:27.704798  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:27.704821  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:27.704870  603921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
	I1213 11:52:27.799233  603921 provision.go:177] copyRemoteCerts
	I1213 11:52:27.799303  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:27.799354  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.816072  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:27.919366  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:27.939339  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:27.957398  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:27.977029  603921 provision.go:87] duration metric: took 289.128062ms to configureAuth
	I1213 11:52:27.977059  603921 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:27.977329  603921 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:27.977459  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.994988  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.995311  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.995346  603921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:28.387156  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:28.387181  603921 machine.go:97] duration metric: took 4.241492519s to provisionDockerMachine
	I1213 11:52:28.387193  603921 client.go:176] duration metric: took 5.957805202s to LocalClient.Create
	I1213 11:52:28.387207  603921 start.go:167] duration metric: took 5.957873469s to libmachine.API.Create "no-preload-307409"
	I1213 11:52:28.387215  603921 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 11:52:28.387226  603921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:28.387291  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:28.387336  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.404972  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.515880  603921 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:28.519219  603921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:28.519251  603921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:28.519263  603921 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:28.519320  603921 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:28.519410  603921 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:28.519562  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:28.526963  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:28.545859  603921 start.go:296] duration metric: took 158.63039ms for postStartSetup
	I1213 11:52:28.546269  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:28.571235  603921 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 11:52:28.571559  603921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:28.571611  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.589707  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.696545  603921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:28.701185  603921 start.go:128] duration metric: took 6.27750586s to createHost
	I1213 11:52:28.701209  603921 start.go:83] releasing machines lock for "no-preload-307409", held for 6.27775003s
	I1213 11:52:28.701287  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:28.718595  603921 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:28.718648  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.718908  603921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:28.718966  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.745537  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.751810  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.847455  603921 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:28.969867  603921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:29.010183  603921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:29.014670  603921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:29.014799  603921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:29.046386  603921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:52:29.046411  603921 start.go:496] detecting cgroup driver to use...
	I1213 11:52:29.046444  603921 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:29.046493  603921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:29.064822  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:29.078520  603921 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:29.078608  603921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:29.096990  603921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:29.116180  603921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:29.242070  603921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:29.378676  603921 docker.go:234] disabling docker service ...
	I1213 11:52:29.378760  603921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:29.401781  603921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:29.417362  603921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:29.558549  603921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:29.695156  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:29.709160  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:29.724923  603921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:29.725028  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.733811  603921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:29.733884  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.742902  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.752357  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.761431  603921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:29.770783  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.779375  603921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.793009  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.802451  603921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:29.811164  603921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:29.818609  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:29.942303  603921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:52:30.130461  603921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:30.130567  603921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:30.135067  603921 start.go:564] Will wait 60s for crictl version
	I1213 11:52:30.135148  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.139648  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:30.167916  603921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
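The "Will wait 60s for socket path /var/run/crio/crio.sock" step above polls for the runtime socket before crictl is queried. Below is a minimal stand-alone Go sketch of such a wait loop, assuming the same socket path and timeout as the log; it is not the minikube implementation.

    // Sketch: poll for the CRI-O socket with a deadline, in the spirit of the
    // "Will wait 60s for socket path" step logged above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket file exists
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }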
	I1213 11:52:30.168057  603921 ssh_runner.go:195] Run: crio --version
	I1213 11:52:30.201235  603921 ssh_runner.go:195] Run: crio --version
	I1213 11:52:30.240166  603921 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:30.243017  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:30.259990  603921 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:30.264096  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:30.274510  603921 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:30.274625  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:30.274673  603921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:30.299868  603921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:52:30.299895  603921 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 11:52:30.299939  603921 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:30.300144  603921 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.300228  603921 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.300318  603921 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.300422  603921 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.300512  603921 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.300599  603921 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.300694  603921 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.301694  603921 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.301935  603921 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.302103  603921 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.302258  603921 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:30.302557  603921 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.302733  603921 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.302971  603921 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.303142  603921 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.527419  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.555499  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1213 11:52:30.570567  603921 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 11:52:30.570662  603921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.570728  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.584640  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.591270  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.595713  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.616170  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.619381  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.622807  603921 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 11:52:30.622860  603921 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.622946  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.623055  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.710823  603921 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 11:52:30.710983  603921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.711082  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.710930  603921 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 11:52:30.711200  603921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.711241  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.736060  603921 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 11:52:30.736163  603921 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.736234  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.740106  603921 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 11:52:30.740189  603921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.740262  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.748359  603921 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 11:52:30.748463  603921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.748511  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.748555  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.748628  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.748683  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.748738  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.748788  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.748845  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.856302  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.856487  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.856521  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.856573  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.856627  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.856653  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.856693  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.971700  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:52:30.971783  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.971816  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.971845  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.971874  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.971903  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.971935  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.972193  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:52:31.074055  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.074094  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1213 11:52:31.074184  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:31.074205  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:52:31.074277  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:31.074302  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 11:52:31.074328  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 11:52:31.074347  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:52:31.074371  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.074278  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:31.074412  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:31.074438  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:31.074484  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:31.112864  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 11:52:31.112902  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 11:52:31.112967  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1213 11:52:31.112980  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1213 11:52:31.123045  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.123083  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1213 11:52:31.123160  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 11:52:31.123177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1213 11:52:31.139055  603921 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1213 11:52:31.139150  603921 retry.go:31] will retry after 147.135859ms: ssh: rejected: connect failed (open failed)
	I1213 11:52:31.139250  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.139295  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1213 11:52:31.139384  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.139650  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:31.139777  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:31.139869  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.188122  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:31.202626  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:31.288550  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.364969  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.365219  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1213 11:52:31.396116  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	W1213 11:52:31.547454  603921 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 11:52:31.547789  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:31.674247  603921 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 11:52:31.674310  603921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:31.674373  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:31.683142  603921 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.683265  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.693453  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:32.082334  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1213 11:52:32.082370  603921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:32.082422  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:32.082516  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
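The existence-check/scp pairs above follow a copy-if-missing pattern: stat the target under /var/lib/minikube/images and transfer the cached tarball only when the stat fails. A local-filesystem Go sketch of that pattern follows (illustrative only; the real transfer in this log goes over SSH), using the pause image paths from this run.

    // Sketch: copy a cached image tarball only if it is not already present at
    // the destination, mirroring the "existence check ... scp" pairs above.
    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func copyIfMissing(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already transferred, nothing to do
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        src := "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1"
        dst := "/var/lib/minikube/images/pause_3.10.1"
        if err := copyIfMissing(src, dst); err != nil {
            fmt.Println("copy failed:", err)
        }
    }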
	
	
	==> CRI-O <==
	Dec 13 11:52:15 embed-certs-326948 crio[653]: time="2025-12-13T11:52:15.866576142Z" level=info msg="Removed container 41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg/dashboard-metrics-scraper" id=202bad14-9551-47b2-9861-1061d83bd0f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:52:17 embed-certs-326948 conmon[1156]: conmon 4b31b7b14f7ea7dae016 <ninfo>: container 1158 exited with status 1
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.867854973Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2518d63f-4e30-4823-a6ab-20bf2a36bcb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.877355453Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9c6f0262-10a7-4109-90c7-a1e9aea959da name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.879143815Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c6a0893b-d5d2-47ac-b1cd-0a3ab630eabc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.879341831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.88821788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.888417356Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b1e671f204c660706266d89cab673326d386f074d90d85cac190ac9b11e8adc0/merged/etc/passwd: no such file or directory"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.88844015Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b1e671f204c660706266d89cab673326d386f074d90d85cac190ac9b11e8adc0/merged/etc/group: no such file or directory"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.897649469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.921671719Z" level=info msg="Created container 520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d: kube-system/storage-provisioner/storage-provisioner" id=c6a0893b-d5d2-47ac-b1cd-0a3ab630eabc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.922938297Z" level=info msg="Starting container: 520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d" id=4b28e0e5-d071-495e-a14e-33ebc386c960 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.926858352Z" level=info msg="Started container" PID=1642 containerID=520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d description=kube-system/storage-provisioner/storage-provisioner id=4b28e0e5-d071-495e-a14e-33ebc386c960 name=/runtime.v1.RuntimeService/StartContainer sandboxID=439671e7fbf85c257ab0e7f0bd0330beccbaa1e43eb6b758797b2e918363e262
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.440636973Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.446263749Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.446312627Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.446576572Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.449886587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.449921418Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.449946534Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.454443767Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.454588991Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.454663018Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.460750754Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.460909369Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	520775b2835f5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   439671e7fbf85       storage-provisioner                          kube-system
	b935dfd5f4ab0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   028bd2ab590f7       dashboard-metrics-scraper-6ffb444bf9-x8mjg   kubernetes-dashboard
	28fd92fb28295       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   862ff0cb1bc21       kubernetes-dashboard-855c9754f9-s4wkb        kubernetes-dashboard
	04543c0f719e7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   31da83bf9c0cb       coredns-66bc5c9577-459p2                     kube-system
	ccccb8d28996f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   de188ff0a5eb9       busybox                                      default
	4b31b7b14f7ea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   439671e7fbf85       storage-provisioner                          kube-system
	5d10a35acf070       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           49 seconds ago      Running             kube-proxy                  1                   d3e4f4dfe4a32       kube-proxy-5thrz                             kube-system
	793a7623a27a1       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           49 seconds ago      Running             kindnet-cni                 1                   77c86af7b1a42       kindnet-q82mh                                kube-system
	2f0d882fac60f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           54 seconds ago      Running             kube-scheduler              1                   1ff0d9e73d215       kube-scheduler-embed-certs-326948            kube-system
	6dd44e49c8819       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           55 seconds ago      Running             etcd                        1                   0a301115e9714       etcd-embed-certs-326948                      kube-system
	5fa45fd0696ef       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           55 seconds ago      Running             kube-apiserver              1                   fe63963346016       kube-apiserver-embed-certs-326948            kube-system
	cb833c8e8af66       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           55 seconds ago      Running             kube-controller-manager     1                   e65348fb8c69a       kube-controller-manager-embed-certs-326948   kube-system
	
	
	==> coredns [04543c0f719e7d85c63aa76e5c99b4b6f1b6ec0e2da337c46f1d0d11c624f0ed] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45231 - 48121 "HINFO IN 1039032509713243738.9011842995951996538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.050725592s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-326948
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-326948
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=embed-certs-326948
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_50_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:50:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-326948
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:52:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:51:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-326948
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                649dcd43-7d72-42de-9a4b-6b3667428bbb
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-459p2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-embed-certs-326948                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-q82mh                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-326948             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-326948    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-5thrz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-326948             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x8mjg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s4wkb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 105s               kube-proxy       
	  Normal   Starting                 48s                kube-proxy       
	  Warning  CgroupV1                 2m                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   Starting                 112s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 112s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  111s               kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s               kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s               kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           107s               node-controller  Node embed-certs-326948 event: Registered Node embed-certs-326948 in Controller
	  Normal   NodeReady                93s                kubelet          Node embed-certs-326948 status is now: NodeReady
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                node-controller  Node embed-certs-326948 event: Registered Node embed-certs-326948 in Controller
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6dd44e49c88192d0751bf92478d724a6b1aba48c24981c5597a801740be36751] <==
	{"level":"warn","ts":"2025-12-13T11:51:43.962517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.023766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.081775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.108830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.147729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.183736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.224637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.264074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.305916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.344514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.395682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.468474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.511625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.543425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.584273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.618688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.656731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.699842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.738086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.773057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.815040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.872144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.888256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.930177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:45.071595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:52:36 up  3:35,  0 user,  load average: 2.68, 2.69, 2.29
	Linux embed-certs-326948 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [793a7623a27a1583339563d46f86b94988bcd8d01c9ee6c3fc5ac20c8cc17b18] <==
	I1213 11:51:47.247349       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:51:47.324697       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 11:51:47.324846       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:51:47.324858       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:51:47.324869       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:51:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:51:47.520241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:51:47.520265       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:51:47.520274       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:51:47.520413       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:52:17.523500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 11:52:17.523678       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:52:17.523747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:52:17.523758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1213 11:52:18.920519       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:52:18.920705       1 metrics.go:72] Registering metrics
	I1213 11:52:18.920807       1 controller.go:711] "Syncing nftables rules"
	I1213 11:52:27.440222       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 11:52:27.440352       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5fa45fd0696ef89615d1d81b1bf2769d38c87713975e43422c105cb0d61cfdaa] <==
	I1213 11:51:46.165681       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 11:51:46.198994       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:51:46.225642       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 11:51:46.225841       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:51:46.229822       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:51:46.229866       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:51:46.239867       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:51:46.239900       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:51:46.240709       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 11:51:46.240828       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:51:46.271441       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:51:46.294304       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 11:51:46.294421       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:51:46.294510       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 11:51:46.657235       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:51:46.936122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:51:47.352952       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 11:51:47.478114       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 11:51:47.526295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:51:47.554180       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:51:47.656610       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.32.186"}
	I1213 11:51:47.674491       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.238.29"}
	I1213 11:51:49.400724       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:51:49.784264       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:51:49.999277       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cb833c8e8af6645f23e9e2891cd88798a8d4211065330a18962b7d19db79c7ba] <==
	I1213 11:51:49.415063       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:51:49.415070       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:51:49.422491       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 11:51:49.422600       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 11:51:49.422711       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-326948"
	I1213 11:51:49.422763       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 11:51:49.423242       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:51:49.425313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 11:51:49.427670       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:51:49.427828       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 11:51:49.427855       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:51:49.427951       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 11:51:49.428118       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 11:51:49.428575       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:51:49.429471       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 11:51:49.433113       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 11:51:49.440217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 11:51:49.440377       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 11:51:49.440469       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:51:49.440521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 11:51:49.440591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 11:51:49.449849       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 11:51:49.452824       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 11:51:49.455059       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:51:49.461991       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [5d10a35acf07003859e6f4a92a7647db98e28eaad48faab459dd989da04b1638] <==
	I1213 11:51:47.392717       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:51:47.571245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:51:47.671671       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:51:47.672587       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1213 11:51:47.672712       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:51:47.741948       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:51:47.742087       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:51:47.746335       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:51:47.746704       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:51:47.746861       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:47.748352       1 config.go:200] "Starting service config controller"
	I1213 11:51:47.748417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:51:47.748460       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:51:47.748488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:51:47.748540       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:51:47.748569       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:51:47.749212       1 config.go:309] "Starting node config controller"
	I1213 11:51:47.751702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:51:47.751796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:51:47.849715       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:51:47.850089       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:51:47.851708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2f0d882fac60f1616055bed06c1f6058d2f4d9771c371fa9e130d01762278744] <==
	I1213 11:51:44.127157       1 serving.go:386] Generated self-signed cert in-memory
	W1213 11:51:46.071992       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 11:51:46.072022       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 11:51:46.072041       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 11:51:46.072049       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 11:51:46.224603       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:51:46.225746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:46.232276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:46.235094       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:46.236031       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:51:46.237892       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:51:46.341166       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 11:51:49 embed-certs-326948 kubelet[780]: I1213 11:51:49.000932     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059705     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/97973cdf-f52e-4441-a054-20360ea34720-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-s4wkb\" (UID: \"97973cdf-f52e-4441-a054-20360ea34720\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s4wkb"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059771     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d18eafb1-f364-4420-88c9-b4b573fb4f27-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-x8mjg\" (UID: \"d18eafb1-f364-4420-88c9-b4b573fb4f27\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059800     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8ckx\" (UniqueName: \"kubernetes.io/projected/97973cdf-f52e-4441-a054-20360ea34720-kube-api-access-h8ckx\") pod \"kubernetes-dashboard-855c9754f9-s4wkb\" (UID: \"97973cdf-f52e-4441-a054-20360ea34720\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s4wkb"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059825     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjj6r\" (UniqueName: \"kubernetes.io/projected/d18eafb1-f364-4420-88c9-b4b573fb4f27-kube-api-access-rjj6r\") pod \"dashboard-metrics-scraper-6ffb444bf9-x8mjg\" (UID: \"d18eafb1-f364-4420-88c9-b4b573fb4f27\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: W1213 11:51:50.341634     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/crio-028bd2ab590f73966bb502ea2da6090b9d2cceca6394b199bcbfd330569179e4 WatchSource:0}: Error finding container 028bd2ab590f73966bb502ea2da6090b9d2cceca6394b199bcbfd330569179e4: Status 404 returned error can't find the container with id 028bd2ab590f73966bb502ea2da6090b9d2cceca6394b199bcbfd330569179e4
	Dec 13 11:51:58 embed-certs-326948 kubelet[780]: I1213 11:51:58.214944     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s4wkb" podStartSLOduration=4.340572183 podStartE2EDuration="9.213770715s" podCreationTimestamp="2025-12-13 11:51:49 +0000 UTC" firstStartedPulling="2025-12-13 11:51:50.306730234 +0000 UTC m=+9.850424787" lastFinishedPulling="2025-12-13 11:51:55.179928684 +0000 UTC m=+14.723623319" observedRunningTime="2025-12-13 11:51:55.836585148 +0000 UTC m=+15.380279701" watchObservedRunningTime="2025-12-13 11:51:58.213770715 +0000 UTC m=+17.757465276"
	Dec 13 11:51:59 embed-certs-326948 kubelet[780]: I1213 11:51:59.801657     780 scope.go:117] "RemoveContainer" containerID="71542f304a4d7e0c81c872c48e086c4fcbc59f365730cc4a878c6dcfaf95d68f"
	Dec 13 11:52:00 embed-certs-326948 kubelet[780]: I1213 11:52:00.804658     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:00 embed-certs-326948 kubelet[780]: E1213 11:52:00.804794     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:00 embed-certs-326948 kubelet[780]: I1213 11:52:00.807488     780 scope.go:117] "RemoveContainer" containerID="71542f304a4d7e0c81c872c48e086c4fcbc59f365730cc4a878c6dcfaf95d68f"
	Dec 13 11:52:01 embed-certs-326948 kubelet[780]: I1213 11:52:01.808982     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:01 embed-certs-326948 kubelet[780]: E1213 11:52:01.809681     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:03 embed-certs-326948 kubelet[780]: I1213 11:52:03.473392     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:03 embed-certs-326948 kubelet[780]: E1213 11:52:03.473580     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: I1213 11:52:15.682184     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: I1213 11:52:15.844623     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: I1213 11:52:15.845323     780 scope.go:117] "RemoveContainer" containerID="b935dfd5f4ab0963b9e8e5cdedc0587e560b4b7330d8a4fc562de7886295f8c9"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: E1213 11:52:15.845671     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:17 embed-certs-326948 kubelet[780]: I1213 11:52:17.855404     780 scope.go:117] "RemoveContainer" containerID="4b31b7b14f7ea7dae0165197cb4dcc5a91e11968d8fa8b418ffd9a16792f2d11"
	Dec 13 11:52:23 embed-certs-326948 kubelet[780]: I1213 11:52:23.473702     780 scope.go:117] "RemoveContainer" containerID="b935dfd5f4ab0963b9e8e5cdedc0587e560b4b7330d8a4fc562de7886295f8c9"
	Dec 13 11:52:23 embed-certs-326948 kubelet[780]: E1213 11:52:23.473926     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:33 embed-certs-326948 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:52:33 embed-certs-326948 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:52:33 embed-certs-326948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [28fd92fb28295293673648bc7a3d13d3ec24a5f53c88319b4e3d85812be1d0da] <==
	2025/12/13 11:51:55 Using namespace: kubernetes-dashboard
	2025/12/13 11:51:55 Using in-cluster config to connect to apiserver
	2025/12/13 11:51:55 Using secret token for csrf signing
	2025/12/13 11:51:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 11:51:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 11:51:55 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 11:51:55 Generating JWE encryption key
	2025/12/13 11:51:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 11:51:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 11:51:56 Initializing JWE encryption key from synchronized object
	2025/12/13 11:51:56 Creating in-cluster Sidecar client
	2025/12/13 11:51:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:56 Serving insecurely on HTTP port: 9090
	2025/12/13 11:52:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:55 Starting overwatch
	
	
	==> storage-provisioner [4b31b7b14f7ea7dae0165197cb4dcc5a91e11968d8fa8b418ffd9a16792f2d11] <==
	I1213 11:51:47.373830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 11:52:17.377192       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d] <==
	I1213 11:52:17.944483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:52:17.977770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:52:17.977923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 11:52:17.983431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:21.439303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:25.699949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:29.298187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:32.352697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:35.375413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:35.383861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:35.384096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:52:35.384358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-326948_9996b2df-3786-4199-82a7-e41e9eb25230!
	I1213 11:52:35.386285       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23390c62-23fe-4c67-a69c-5011159a5f31", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-326948_9996b2df-3786-4199-82a7-e41e9eb25230 became leader
	W1213 11:52:35.398466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:35.415407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:35.485083       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-326948_9996b2df-3786-4199-82a7-e41e9eb25230!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326948 -n embed-certs-326948
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326948 -n embed-certs-326948: exit status 2 (520.896374ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-326948 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
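The pause failure above can be reproduced by hand. The commands below are a minimal sketch assembled from invocations that already appear in this report (the profile name embed-certs-326948 and every flag are taken from this log, not from the minikube documentation):

	out/minikube-linux-arm64 pause -p embed-certs-326948 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326948 -n embed-certs-326948
	kubectl --context embed-certs-326948 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

The harness treats the status command's exit status 2 as "may be ok"; the kubectl call lists any pods that are not in the Running phase, as the post-mortem step does.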
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-326948
helpers_test.go:244: (dbg) docker inspect embed-certs-326948:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14",
	        "Created": "2025-12-13T11:50:16.044997755Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 600208,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:51:33.3802045Z",
	            "FinishedAt": "2025-12-13T11:51:31.76267067Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/hosts",
	        "LogPath": "/var/lib/docker/containers/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14-json.log",
	        "Name": "/embed-certs-326948",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-326948:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-326948",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14",
	                "LowerDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ad8a30cfbe144c76a0244f97d4d2c68591d89705a8a98bd566bcd8477b3dd63/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-326948",
	                "Source": "/var/lib/docker/volumes/embed-certs-326948/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-326948",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-326948",
	                "name.minikube.sigs.k8s.io": "embed-certs-326948",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "db2bb4222837e1683856073810bb516689072ab5a31fe5f9a95d933ae7a31120",
	            "SandboxKey": "/var/run/docker/netns/db2bb4222837",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-326948": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:5b:b5:49:7e:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5b063c432202ef9f217d4b391af56f96171f14adb917467f7393ca248725893a",
	                    "EndpointID": "9517e70091383b972d818308b553cb68a806bcd2ba74f75934c0ea74636529c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-326948",
	                        "4fffdfd58e00"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
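The docker inspect output above shows the node container still running with its ports published on 127.0.0.1. For a quicker check of just those fields, the standard docker CLI templating and port subcommands can be used (a sketch for manual triage, not part of the harness):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-326948
	docker port embed-certs-326948 8443/tcp

The first command prints the container state and whether docker itself considers it paused; the second prints the host binding for the API server port (33456 in the NetworkSettings section above).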
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948: exit status 2 (519.844444ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-326948 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-326948 logs -n 25: (1.552261149s)
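The same post-mortem logs can be collected manually with the command shown above; -n 25 limits each log source to its last 25 lines, which is why every "==>" section below is 25 lines long. A sketch for a wider capture follows (the --file flag is an assumption about the minikube CLI here; confirm with minikube logs --help):

	out/minikube-linux-arm64 -p embed-certs-326948 logs -n 100
	out/minikube-linux-arm64 -p embed-certs-326948 logs --file=embed-certs-326948-postmortem.txt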
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:48 UTC │ 13 Dec 25 11:49 UTC │
	│ image   │ old-k8s-version-051699 image list --format=json                                                                                                                                                                                               │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ pause   │ -p old-k8s-version-051699 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │                     │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                     │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                     │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                               │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                   │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:52:22
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:52:22.177878  603921 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:22.177999  603921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:22.178011  603921 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:22.178016  603921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:22.178255  603921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:22.178669  603921 out.go:368] Setting JSON to false
	I1213 11:52:22.179625  603921 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12895,"bootTime":1765613848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:22.179698  603921 start.go:143] virtualization:  
	I1213 11:52:22.183759  603921 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:22.187220  603921 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:22.187293  603921 notify.go:221] Checking for updates...
	I1213 11:52:22.194687  603921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:22.202302  603921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:22.205231  603921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:22.208078  603921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:22.210961  603921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:22.214458  603921 config.go:182] Loaded profile config "embed-certs-326948": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:52:22.214574  603921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:22.242903  603921 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:22.243027  603921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:22.310771  603921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:52:22.30036342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:22.310879  603921 docker.go:319] overlay module found
	I1213 11:52:22.315971  603921 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:22.318784  603921 start.go:309] selected driver: docker
	I1213 11:52:22.318803  603921 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:22.318817  603921 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:22.319579  603921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:22.380053  603921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:52:22.371010804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:22.380204  603921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 11:52:22.380437  603921 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 11:52:22.383329  603921 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:22.386243  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:22.386313  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:22.386327  603921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:22.386408  603921 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:22.389646  603921 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 11:52:22.392478  603921 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:22.395420  603921 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:22.398416  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:22.398505  603921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:22.398545  603921 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 11:52:22.398575  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json: {Name:mkec3b7ed172f77da3b248fbbf20fa0dbee47daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
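	The cluster spec assembled above is persisted as a per-profile config.json (the WriteFile just logged). A quick way to read the saved settings back is sketched below; the path and field names are taken from the log and the cluster config dump above, and jq is an assumption (plain cat works as well):
	    # print a few of the persisted profile settings (sketch; requires jq)
	    jq '.Driver, .Memory, .KubernetesConfig.KubernetesVersion' \
	      /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json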
	I1213 11:52:22.400508  603921 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.400655  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 11:52:22.400685  603921 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.853487ms
	I1213 11:52:22.400708  603921 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 11:52:22.400731  603921 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.401593  603921 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:22.402011  603921 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402185  603921 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:22.402473  603921 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402632  603921 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:22.402788  603921 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.402897  603921 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:22.403057  603921 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403186  603921 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:22.403350  603921 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403414  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 11:52:22.403426  603921 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.443µs
	I1213 11:52:22.403434  603921 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 11:52:22.403457  603921 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.403493  603921 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 11:52:22.403502  603921 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 54.286µs
	I1213 11:52:22.403549  603921 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 11:52:22.405169  603921 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:22.405591  603921 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:22.406004  603921 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:22.406392  603921 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:22.406763  603921 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:22.423280  603921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:22.423306  603921 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:22.423321  603921 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:22.423351  603921 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:22.423450  603921 start.go:364] duration metric: took 84.382µs to acquireMachinesLock for "no-preload-307409"
	I1213 11:52:22.423480  603921 start.go:93] Provisioning new machine with config: &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:22.423661  603921 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:22.429079  603921 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:22.429336  603921 start.go:159] libmachine.API.Create for "no-preload-307409" (driver="docker")
	I1213 11:52:22.429376  603921 client.go:173] LocalClient.Create starting
	I1213 11:52:22.429452  603921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:22.429493  603921 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:22.429513  603921 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:22.429576  603921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:22.429646  603921 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:22.429666  603921 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:22.430121  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:22.448911  603921 cli_runner.go:211] docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:22.448997  603921 network_create.go:284] running [docker network inspect no-preload-307409] to gather additional debugging logs...
	I1213 11:52:22.449017  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409
	W1213 11:52:22.468248  603921 cli_runner.go:211] docker network inspect no-preload-307409 returned with exit code 1
	I1213 11:52:22.468284  603921 network_create.go:287] error running [docker network inspect no-preload-307409]: docker network inspect no-preload-307409: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-307409 not found
	I1213 11:52:22.468303  603921 network_create.go:289] output of [docker network inspect no-preload-307409]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-307409 not found
	
	** /stderr **
	I1213 11:52:22.468404  603921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:22.485064  603921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:22.485424  603921 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:22.485663  603921 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:22.485957  603921 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5b063c432202 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:6c:83:b3:7b:3a} reservation:<nil>}
	I1213 11:52:22.486426  603921 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bb83e0}
	I1213 11:52:22.486448  603921 network_create.go:124] attempt to create docker network no-preload-307409 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 11:52:22.486504  603921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-307409 no-preload-307409
	I1213 11:52:22.561619  603921 network_create.go:108] docker network no-preload-307409 192.168.85.0/24 created
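	The subnet scan above walks the existing bridge networks (192.168.49/58/67/76.0/24 are taken) before settling on 192.168.85.0/24. The same picture can be reproduced from the host with plain docker commands; this is a sketch, not part of the test run:
	    # list every docker network with its subnet, mirroring the skip/use decisions logged above
	    docker network ls -q | xargs -n1 docker network inspect \
	      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	    # confirm the network minikube just created, its gateway and minikube labels
	    docker network inspect no-preload-307409 \
	      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}} {{.Labels}}'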
	I1213 11:52:22.561649  603921 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-307409" container
	I1213 11:52:22.561735  603921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:22.577243  603921 cli_runner.go:164] Run: docker volume create no-preload-307409 --label name.minikube.sigs.k8s.io=no-preload-307409 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:22.597274  603921 oci.go:103] Successfully created a docker volume no-preload-307409
	I1213 11:52:22.597374  603921 cli_runner.go:164] Run: docker run --rm --name no-preload-307409-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307409 --entrypoint /usr/bin/test -v no-preload-307409:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:22.724954  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:22.752376  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:52:22.778070  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:22.797264  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:52:22.805390  603921 cache.go:162] opening:  /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:23.209223  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 11:52:23.209301  603921 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 806.245475ms
	I1213 11:52:23.209330  603921 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 11:52:23.266936  603921 oci.go:107] Successfully prepared a docker volume no-preload-307409
	I1213 11:52:23.266994  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	W1213 11:52:23.267122  603921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:23.267237  603921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:23.342732  603921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-307409 --name no-preload-307409 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-307409 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-307409 --network no-preload-307409 --ip 192.168.85.2 --volume no-preload-307409:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
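	The "machine" is just the kicbase container started above, with SSH, the Docker socket, the registry port and the API server published on loopback (ports 22, 2376, 5000, 32443 and 8443 in the command). The dynamically assigned host ports can be read back with docker port; a sketch:
	    docker port no-preload-307409 22     # SSH endpoint used by provisioning below
	    docker port no-preload-307409 8443   # Kubernetes API server
	    docker inspect no-preload-307409 --format '{{json .NetworkSettings.Ports}}'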
	I1213 11:52:23.695331  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 11:52:23.695405  603921 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.2929342s
	I1213 11:52:23.695435  603921 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 11:52:23.714188  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 11:52:23.714266  603921 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.312276464s
	I1213 11:52:23.714295  603921 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 11:52:23.746751  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Running}}
	I1213 11:52:23.749641  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 11:52:23.749678  603921 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.346893086s
	I1213 11:52:23.749691  603921 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 11:52:23.778616  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:23.802046  603921 cli_runner.go:164] Run: docker exec no-preload-307409 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:23.818032  603921 cache.go:157] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 11:52:23.818058  603921 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.417329777s
	I1213 11:52:23.818070  603921 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 11:52:23.818085  603921 cache.go:87] Successfully saved all images to host disk.
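	Because this run uses --preload=false, each image is fetched and written as a per-architecture tarball under the cache directory rather than coming from a single preload archive. The saves logged above can be checked directly on the host (paths from the log):
	    ls -R /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io
	    ls /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube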
	I1213 11:52:23.869927  603921 oci.go:144] the created container "no-preload-307409" has a running status.
	I1213 11:52:23.869977  603921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa...
	I1213 11:52:23.990936  603921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:24.020412  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:24.046398  603921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:24.046421  603921 kic_runner.go:114] Args: [docker exec --privileged no-preload-307409 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:24.114724  603921 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 11:52:24.145665  603921 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:24.145765  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:24.178680  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:24.179021  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:24.179031  603921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:24.179772  603921 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 11:52:27.331003  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
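	Provisioning is driven over SSH to the forwarded loopback port (33458 in this run; the value is per-session). The same session can be opened manually with the generated key and the docker user shown in the log, for example to rerun the hostname check above (sketch only):
	    ssh -i /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa \
	        -p 33458 docker@127.0.0.1 hostname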
	
	I1213 11:52:27.331028  603921 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 11:52:27.331091  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.350635  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.351104  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.351127  603921 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 11:52:27.517546  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 11:52:27.517640  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.537725  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.538047  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.538069  603921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:27.687673  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:52:27.687762  603921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:27.687826  603921 ubuntu.go:190] setting up certificates
	I1213 11:52:27.687859  603921 provision.go:84] configureAuth start
	I1213 11:52:27.687988  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:27.704465  603921 provision.go:143] copyHostCerts
	I1213 11:52:27.704533  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:27.704542  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:27.704618  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:27.704711  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:27.704717  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:27.704742  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:27.704793  603921 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:27.704798  603921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:27.704821  603921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:27.704870  603921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
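	configureAuth generates a server certificate whose SANs must cover every address the machine will be reached on (the san=[...] list above). The generated cert can be inspected from the host with openssl; a sketch using the path from the log:
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'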
	I1213 11:52:27.799233  603921 provision.go:177] copyRemoteCerts
	I1213 11:52:27.799303  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:27.799354  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.816072  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:27.919366  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:27.939339  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:27.957398  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:27.977029  603921 provision.go:87] duration metric: took 289.128062ms to configureAuth
	I1213 11:52:27.977059  603921 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:27.977329  603921 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:27.977459  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:27.994988  603921 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:27.995311  603921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1213 11:52:27.995346  603921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:28.387156  603921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:28.387181  603921 machine.go:97] duration metric: took 4.241492519s to provisionDockerMachine
	I1213 11:52:28.387193  603921 client.go:176] duration metric: took 5.957805202s to LocalClient.Create
	I1213 11:52:28.387207  603921 start.go:167] duration metric: took 5.957873469s to libmachine.API.Create "no-preload-307409"
	I1213 11:52:28.387215  603921 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 11:52:28.387226  603921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:28.387291  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:28.387336  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.404972  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.515880  603921 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:28.519219  603921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:28.519251  603921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:28.519263  603921 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:28.519320  603921 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:28.519410  603921 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:28.519562  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:28.526963  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:28.545859  603921 start.go:296] duration metric: took 158.63039ms for postStartSetup
	I1213 11:52:28.546269  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:28.571235  603921 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 11:52:28.571559  603921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:28.571611  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.589707  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.696545  603921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:28.701185  603921 start.go:128] duration metric: took 6.27750586s to createHost
	I1213 11:52:28.701209  603921 start.go:83] releasing machines lock for "no-preload-307409", held for 6.27775003s
	I1213 11:52:28.701287  603921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 11:52:28.718595  603921 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:28.718648  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.718908  603921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:28.718966  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:28.745537  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.751810  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:28.847455  603921 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:28.969867  603921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:29.010183  603921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:29.014670  603921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:29.014799  603921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:29.046386  603921 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:52:29.046411  603921 start.go:496] detecting cgroup driver to use...
	I1213 11:52:29.046444  603921 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:29.046493  603921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:29.064822  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:29.078520  603921 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:29.078608  603921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:29.096990  603921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:29.116180  603921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:29.242070  603921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:29.378676  603921 docker.go:234] disabling docker service ...
	I1213 11:52:29.378760  603921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:29.401781  603921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:29.417362  603921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:29.558549  603921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:29.695156  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:29.709160  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:29.724923  603921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:29.725028  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.733811  603921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:29.733884  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.742902  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.752357  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.761431  603921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:29.770783  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.779375  603921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.793009  603921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:29.802451  603921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:29.811164  603921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:29.818609  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:29.942303  603921 ssh_runner.go:195] Run: sudo systemctl restart crio
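	The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) before the daemon-reload and restart. Once the node is up, the effective drop-in can be read back from the host; a sketch assuming the container name from this run:
	    docker exec no-preload-307409 grep -E \
	      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf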
	I1213 11:52:30.130461  603921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:30.130567  603921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:30.135067  603921 start.go:564] Will wait 60s for crictl version
	I1213 11:52:30.135148  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.139648  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:30.167916  603921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
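	crictl talks to whatever endpoint /etc/crictl.yaml points at (written a few lines above to the CRI-O socket). The same probe can be repeated with the endpoint given explicitly, which helps when more than one runtime is installed on the node; a sketch run from the host, using the crictl path shown in the log:
	    docker exec no-preload-307409 /usr/local/bin/crictl \
	      --runtime-endpoint unix:///var/run/crio/crio.sock version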
	I1213 11:52:30.168057  603921 ssh_runner.go:195] Run: crio --version
	I1213 11:52:30.201235  603921 ssh_runner.go:195] Run: crio --version
	I1213 11:52:30.240166  603921 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:30.243017  603921 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:30.259990  603921 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:30.264096  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
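	The one-liner above rewrites /etc/hosts atomically (filter out any old entry, append the new one, copy the temp file back) so host.minikube.internal resolves to the network gateway, 192.168.85.1 here. The result can be verified afterwards:
	    docker exec no-preload-307409 grep host.minikube.internal /etc/hosts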
	I1213 11:52:30.274510  603921 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:30.274625  603921 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:30.274673  603921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:30.299868  603921 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1213 11:52:30.299895  603921 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1213 11:52:30.299939  603921 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:30.300144  603921 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.300228  603921 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.300318  603921 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.300422  603921 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.300512  603921 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.300599  603921 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.300694  603921 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.301694  603921 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.301935  603921 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.302103  603921 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.302258  603921 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:30.302557  603921 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.302733  603921 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.302971  603921 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.303142  603921 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.527419  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.555499  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1213 11:52:30.570567  603921 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1213 11:52:30.570662  603921 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.570728  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.584640  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.591270  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.595713  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.616170  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.619381  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.622807  603921 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1213 11:52:30.622860  603921 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1213 11:52:30.622946  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.623055  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.710823  603921 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1213 11:52:30.710983  603921 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.711082  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.710930  603921 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1213 11:52:30.711200  603921 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.711241  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.736060  603921 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1213 11:52:30.736163  603921 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.736234  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.740106  603921 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1213 11:52:30.740189  603921 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.740262  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.748359  603921 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1213 11:52:30.748463  603921 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.748511  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.748555  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:30.748628  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.748683  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.748738  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.748788  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.748845  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.856302  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.856487  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.856521  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.856573  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.856627  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.856653  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1213 11:52:30.856693  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.971700  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1213 11:52:30.971783  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1213 11:52:30.971816  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1213 11:52:30.971845  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1213 11:52:30.971874  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1213 11:52:30.971903  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:30.971935  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1213 11:52:30.972193  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1213 11:52:31.074055  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.074094  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1213 11:52:31.074184  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:31.074205  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1213 11:52:31.074277  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:31.074302  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1213 11:52:31.074328  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1213 11:52:31.074347  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:52:31.074371  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.074278  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1213 11:52:31.074412  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1213 11:52:31.074438  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:31.074484  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:31.112864  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1213 11:52:31.112902  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1213 11:52:31.112967  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1213 11:52:31.112980  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1213 11:52:31.123045  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.123083  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1213 11:52:31.123160  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1213 11:52:31.123177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1213 11:52:31.139055  603921 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1213 11:52:31.139150  603921 retry.go:31] will retry after 147.135859ms: ssh: rejected: connect failed (open failed)
	I1213 11:52:31.139250  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.139295  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1213 11:52:31.139384  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.139650  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:31.139777  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:31.139869  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.188122  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:31.202626  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 11:52:31.288550  603921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 11:52:31.364969  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1213 11:52:31.365219  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1213 11:52:31.396116  603921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	W1213 11:52:31.547454  603921 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1213 11:52:31.547789  603921 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:31.674247  603921 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1213 11:52:31.674310  603921 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:31.674373  603921 ssh_runner.go:195] Run: which crictl
	I1213 11:52:31.683142  603921 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.683265  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1213 11:52:31.693453  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:32.082334  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1213 11:52:32.082370  603921 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:32.082422  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1213 11:52:32.082516  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:34.308510  603921 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.225962736s)
	I1213 11:52:34.308588  603921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 11:52:34.308603  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (2.22615852s)
	I1213 11:52:34.308620  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1213 11:52:34.308638  603921 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:34.308676  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1213 11:52:35.518931  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.210235098s)
	I1213 11:52:35.518963  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1213 11:52:35.518985  603921 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:35.519031  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1213 11:52:35.519092  603921 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.210494086s)
	I1213 11:52:35.519120  603921 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1213 11:52:35.519184  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:37.072889  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.553831938s)
	I1213 11:52:37.072913  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1213 11:52:37.072933  603921 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:52:37.072981  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1213 11:52:37.073078  603921 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.553877116s)
	I1213 11:52:37.073095  603921 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1213 11:52:37.073111  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	
	
	==> CRI-O <==
	Dec 13 11:52:15 embed-certs-326948 crio[653]: time="2025-12-13T11:52:15.866576142Z" level=info msg="Removed container 41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg/dashboard-metrics-scraper" id=202bad14-9551-47b2-9861-1061d83bd0f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 11:52:17 embed-certs-326948 conmon[1156]: conmon 4b31b7b14f7ea7dae016 <ninfo>: container 1158 exited with status 1
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.867854973Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2518d63f-4e30-4823-a6ab-20bf2a36bcb1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.877355453Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9c6f0262-10a7-4109-90c7-a1e9aea959da name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.879143815Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c6a0893b-d5d2-47ac-b1cd-0a3ab630eabc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.879341831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.88821788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.888417356Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b1e671f204c660706266d89cab673326d386f074d90d85cac190ac9b11e8adc0/merged/etc/passwd: no such file or directory"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.88844015Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b1e671f204c660706266d89cab673326d386f074d90d85cac190ac9b11e8adc0/merged/etc/group: no such file or directory"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.897649469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.921671719Z" level=info msg="Created container 520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d: kube-system/storage-provisioner/storage-provisioner" id=c6a0893b-d5d2-47ac-b1cd-0a3ab630eabc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.922938297Z" level=info msg="Starting container: 520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d" id=4b28e0e5-d071-495e-a14e-33ebc386c960 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 11:52:17 embed-certs-326948 crio[653]: time="2025-12-13T11:52:17.926858352Z" level=info msg="Started container" PID=1642 containerID=520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d description=kube-system/storage-provisioner/storage-provisioner id=4b28e0e5-d071-495e-a14e-33ebc386c960 name=/runtime.v1.RuntimeService/StartContainer sandboxID=439671e7fbf85c257ab0e7f0bd0330beccbaa1e43eb6b758797b2e918363e262
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.440636973Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.446263749Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.446312627Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.446576572Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.449886587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.449921418Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.449946534Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.454443767Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.454588991Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.454663018Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.460750754Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 11:52:27 embed-certs-326948 crio[653]: time="2025-12-13T11:52:27.460909369Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	520775b2835f5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   439671e7fbf85       storage-provisioner                          kube-system
	b935dfd5f4ab0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   028bd2ab590f7       dashboard-metrics-scraper-6ffb444bf9-x8mjg   kubernetes-dashboard
	28fd92fb28295       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   862ff0cb1bc21       kubernetes-dashboard-855c9754f9-s4wkb        kubernetes-dashboard
	04543c0f719e7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   31da83bf9c0cb       coredns-66bc5c9577-459p2                     kube-system
	ccccb8d28996f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   de188ff0a5eb9       busybox                                      default
	4b31b7b14f7ea       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   439671e7fbf85       storage-provisioner                          kube-system
	5d10a35acf070       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                           52 seconds ago      Running             kube-proxy                  1                   d3e4f4dfe4a32       kube-proxy-5thrz                             kube-system
	793a7623a27a1       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           52 seconds ago      Running             kindnet-cni                 1                   77c86af7b1a42       kindnet-q82mh                                kube-system
	2f0d882fac60f       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                           57 seconds ago      Running             kube-scheduler              1                   1ff0d9e73d215       kube-scheduler-embed-certs-326948            kube-system
	6dd44e49c8819       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                           57 seconds ago      Running             etcd                        1                   0a301115e9714       etcd-embed-certs-326948                      kube-system
	5fa45fd0696ef       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                           57 seconds ago      Running             kube-apiserver              1                   fe63963346016       kube-apiserver-embed-certs-326948            kube-system
	cb833c8e8af66       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                           57 seconds ago      Running             kube-controller-manager     1                   e65348fb8c69a       kube-controller-manager-embed-certs-326948   kube-system
	
	
	==> coredns [04543c0f719e7d85c63aa76e5c99b4b6f1b6ec0e2da337c46f1d0d11c624f0ed] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45231 - 48121 "HINFO IN 1039032509713243738.9011842995951996538. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.050725592s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-326948
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-326948
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b
	                    minikube.k8s.io/name=embed-certs-326948
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T11_50_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 11:50:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-326948
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 11:52:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:50:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 11:52:16 +0000   Sat, 13 Dec 2025 11:51:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-326948
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 78f85184c267cd52312ad0096937f858
	  System UUID:                649dcd43-7d72-42de-9a4b-6b3667428bbb
	  Boot ID:                    9bd24839-35d9-4392-a0e0-b2e0b9823eaa
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-459p2                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-embed-certs-326948                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-q82mh                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-326948             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-326948    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-5thrz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-326948             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x8mjg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s4wkb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 108s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Warning  CgroupV1                 2m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   Starting                 115s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 115s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  114s                 kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    114s                 kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     114s                 kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           110s                 node-controller  Node embed-certs-326948 event: Registered Node embed-certs-326948 in Controller
	  Normal   NodeReady                96s                  kubelet          Node embed-certs-326948 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node embed-certs-326948 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node embed-certs-326948 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node embed-certs-326948 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node embed-certs-326948 event: Registered Node embed-certs-326948 in Controller
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6dd44e49c88192d0751bf92478d724a6b1aba48c24981c5597a801740be36751] <==
	{"level":"warn","ts":"2025-12-13T11:51:43.962517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.023766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.081775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.108830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.147729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.183736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.224637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.264074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.305916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.344514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.395682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.468474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.511625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.543425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.584273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.618688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.656731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.699842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.738086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.773057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.815040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.872144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.888256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:44.930177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T11:51:45.071595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:52:39 up  3:35,  0 user,  load average: 2.87, 2.73, 2.30
	Linux embed-certs-326948 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [793a7623a27a1583339563d46f86b94988bcd8d01c9ee6c3fc5ac20c8cc17b18] <==
	I1213 11:51:47.247349       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 11:51:47.324697       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 11:51:47.324846       1 main.go:148] setting mtu 1500 for CNI 
	I1213 11:51:47.324858       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 11:51:47.324869       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T11:51:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 11:51:47.520241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 11:51:47.520265       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 11:51:47.520274       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 11:51:47.520413       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 11:52:17.523500       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 11:52:17.523678       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 11:52:17.523747       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 11:52:17.523758       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1213 11:52:18.920519       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 11:52:18.920705       1 metrics.go:72] Registering metrics
	I1213 11:52:18.920807       1 controller.go:711] "Syncing nftables rules"
	I1213 11:52:27.440222       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 11:52:27.440352       1 main.go:301] handling current node
	I1213 11:52:37.448158       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 11:52:37.448256       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5fa45fd0696ef89615d1d81b1bf2769d38c87713975e43422c105cb0d61cfdaa] <==
	I1213 11:51:46.165681       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 11:51:46.198994       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 11:51:46.225642       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 11:51:46.225841       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 11:51:46.229822       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 11:51:46.229866       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 11:51:46.239867       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 11:51:46.239900       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 11:51:46.240709       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 11:51:46.240828       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 11:51:46.271441       1 cache.go:39] Caches are synced for autoregister controller
	I1213 11:51:46.294304       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 11:51:46.294421       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 11:51:46.294510       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 11:51:46.657235       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 11:51:46.936122       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 11:51:47.352952       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 11:51:47.478114       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 11:51:47.526295       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 11:51:47.554180       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 11:51:47.656610       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.32.186"}
	I1213 11:51:47.674491       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.238.29"}
	I1213 11:51:49.400724       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 11:51:49.784264       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 11:51:49.999277       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cb833c8e8af6645f23e9e2891cd88798a8d4211065330a18962b7d19db79c7ba] <==
	I1213 11:51:49.415063       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 11:51:49.415070       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 11:51:49.422491       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 11:51:49.422600       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 11:51:49.422711       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-326948"
	I1213 11:51:49.422763       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 11:51:49.423242       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 11:51:49.425313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 11:51:49.427670       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 11:51:49.427828       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 11:51:49.427855       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 11:51:49.427951       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 11:51:49.428118       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 11:51:49.428575       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 11:51:49.429471       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 11:51:49.433113       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 11:51:49.440217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 11:51:49.440377       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 11:51:49.440469       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 11:51:49.440521       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 11:51:49.440591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 11:51:49.449849       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 11:51:49.452824       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 11:51:49.455059       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 11:51:49.461991       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [5d10a35acf07003859e6f4a92a7647db98e28eaad48faab459dd989da04b1638] <==
	I1213 11:51:47.392717       1 server_linux.go:53] "Using iptables proxy"
	I1213 11:51:47.571245       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 11:51:47.671671       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 11:51:47.672587       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1213 11:51:47.672712       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 11:51:47.741948       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 11:51:47.742087       1 server_linux.go:132] "Using iptables Proxier"
	I1213 11:51:47.746335       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 11:51:47.746704       1 server.go:527] "Version info" version="v1.34.2"
	I1213 11:51:47.746861       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:47.748352       1 config.go:200] "Starting service config controller"
	I1213 11:51:47.748417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 11:51:47.748460       1 config.go:106] "Starting endpoint slice config controller"
	I1213 11:51:47.748488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 11:51:47.748540       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 11:51:47.748569       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 11:51:47.749212       1 config.go:309] "Starting node config controller"
	I1213 11:51:47.751702       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 11:51:47.751796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 11:51:47.849715       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 11:51:47.850089       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 11:51:47.851708       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2f0d882fac60f1616055bed06c1f6058d2f4d9771c371fa9e130d01762278744] <==
	I1213 11:51:44.127157       1 serving.go:386] Generated self-signed cert in-memory
	W1213 11:51:46.071992       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 11:51:46.072022       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 11:51:46.072041       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 11:51:46.072049       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 11:51:46.224603       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 11:51:46.225746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 11:51:46.232276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:46.235094       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 11:51:46.236031       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 11:51:46.237892       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 11:51:46.341166       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 11:51:49 embed-certs-326948 kubelet[780]: I1213 11:51:49.000932     780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059705     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/97973cdf-f52e-4441-a054-20360ea34720-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-s4wkb\" (UID: \"97973cdf-f52e-4441-a054-20360ea34720\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s4wkb"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059771     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d18eafb1-f364-4420-88c9-b4b573fb4f27-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-x8mjg\" (UID: \"d18eafb1-f364-4420-88c9-b4b573fb4f27\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059800     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8ckx\" (UniqueName: \"kubernetes.io/projected/97973cdf-f52e-4441-a054-20360ea34720-kube-api-access-h8ckx\") pod \"kubernetes-dashboard-855c9754f9-s4wkb\" (UID: \"97973cdf-f52e-4441-a054-20360ea34720\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s4wkb"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: I1213 11:51:50.059825     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjj6r\" (UniqueName: \"kubernetes.io/projected/d18eafb1-f364-4420-88c9-b4b573fb4f27-kube-api-access-rjj6r\") pod \"dashboard-metrics-scraper-6ffb444bf9-x8mjg\" (UID: \"d18eafb1-f364-4420-88c9-b4b573fb4f27\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg"
	Dec 13 11:51:50 embed-certs-326948 kubelet[780]: W1213 11:51:50.341634     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4fffdfd58e00004a7eeb7aee6e0d0bb1aaa943993b1efeddabb7a300070b2f14/crio-028bd2ab590f73966bb502ea2da6090b9d2cceca6394b199bcbfd330569179e4 WatchSource:0}: Error finding container 028bd2ab590f73966bb502ea2da6090b9d2cceca6394b199bcbfd330569179e4: Status 404 returned error can't find the container with id 028bd2ab590f73966bb502ea2da6090b9d2cceca6394b199bcbfd330569179e4
	Dec 13 11:51:58 embed-certs-326948 kubelet[780]: I1213 11:51:58.214944     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-s4wkb" podStartSLOduration=4.340572183 podStartE2EDuration="9.213770715s" podCreationTimestamp="2025-12-13 11:51:49 +0000 UTC" firstStartedPulling="2025-12-13 11:51:50.306730234 +0000 UTC m=+9.850424787" lastFinishedPulling="2025-12-13 11:51:55.179928684 +0000 UTC m=+14.723623319" observedRunningTime="2025-12-13 11:51:55.836585148 +0000 UTC m=+15.380279701" watchObservedRunningTime="2025-12-13 11:51:58.213770715 +0000 UTC m=+17.757465276"
	Dec 13 11:51:59 embed-certs-326948 kubelet[780]: I1213 11:51:59.801657     780 scope.go:117] "RemoveContainer" containerID="71542f304a4d7e0c81c872c48e086c4fcbc59f365730cc4a878c6dcfaf95d68f"
	Dec 13 11:52:00 embed-certs-326948 kubelet[780]: I1213 11:52:00.804658     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:00 embed-certs-326948 kubelet[780]: E1213 11:52:00.804794     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:00 embed-certs-326948 kubelet[780]: I1213 11:52:00.807488     780 scope.go:117] "RemoveContainer" containerID="71542f304a4d7e0c81c872c48e086c4fcbc59f365730cc4a878c6dcfaf95d68f"
	Dec 13 11:52:01 embed-certs-326948 kubelet[780]: I1213 11:52:01.808982     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:01 embed-certs-326948 kubelet[780]: E1213 11:52:01.809681     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:03 embed-certs-326948 kubelet[780]: I1213 11:52:03.473392     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:03 embed-certs-326948 kubelet[780]: E1213 11:52:03.473580     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: I1213 11:52:15.682184     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: I1213 11:52:15.844623     780 scope.go:117] "RemoveContainer" containerID="41ecc256171f3cb32f56cc3f1444214820639a6479377db1abbe69ce0b3643d2"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: I1213 11:52:15.845323     780 scope.go:117] "RemoveContainer" containerID="b935dfd5f4ab0963b9e8e5cdedc0587e560b4b7330d8a4fc562de7886295f8c9"
	Dec 13 11:52:15 embed-certs-326948 kubelet[780]: E1213 11:52:15.845671     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:17 embed-certs-326948 kubelet[780]: I1213 11:52:17.855404     780 scope.go:117] "RemoveContainer" containerID="4b31b7b14f7ea7dae0165197cb4dcc5a91e11968d8fa8b418ffd9a16792f2d11"
	Dec 13 11:52:23 embed-certs-326948 kubelet[780]: I1213 11:52:23.473702     780 scope.go:117] "RemoveContainer" containerID="b935dfd5f4ab0963b9e8e5cdedc0587e560b4b7330d8a4fc562de7886295f8c9"
	Dec 13 11:52:23 embed-certs-326948 kubelet[780]: E1213 11:52:23.473926     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x8mjg_kubernetes-dashboard(d18eafb1-f364-4420-88c9-b4b573fb4f27)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x8mjg" podUID="d18eafb1-f364-4420-88c9-b4b573fb4f27"
	Dec 13 11:52:33 embed-certs-326948 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 11:52:33 embed-certs-326948 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 11:52:33 embed-certs-326948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [28fd92fb28295293673648bc7a3d13d3ec24a5f53c88319b4e3d85812be1d0da] <==
	2025/12/13 11:51:55 Using namespace: kubernetes-dashboard
	2025/12/13 11:51:55 Using in-cluster config to connect to apiserver
	2025/12/13 11:51:55 Using secret token for csrf signing
	2025/12/13 11:51:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 11:51:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 11:51:55 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 11:51:55 Generating JWE encryption key
	2025/12/13 11:51:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 11:51:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 11:51:56 Initializing JWE encryption key from synchronized object
	2025/12/13 11:51:56 Creating in-cluster Sidecar client
	2025/12/13 11:51:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:56 Serving insecurely on HTTP port: 9090
	2025/12/13 11:52:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 11:51:55 Starting overwatch
	
	
	==> storage-provisioner [4b31b7b14f7ea7dae0165197cb4dcc5a91e11968d8fa8b418ffd9a16792f2d11] <==
	I1213 11:51:47.373830       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 11:52:17.377192       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [520775b2835f58d12a03e4f13fd8b850209d14f35df495d20095ce075d91a77d] <==
	I1213 11:52:17.944483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 11:52:17.977770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 11:52:17.977923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 11:52:17.983431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:21.439303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:25.699949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:29.298187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:32.352697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:35.375413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:35.383861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:35.384096       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 11:52:35.384358       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-326948_9996b2df-3786-4199-82a7-e41e9eb25230!
	I1213 11:52:35.386285       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23390c62-23fe-4c67-a69c-5011159a5f31", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-326948_9996b2df-3786-4199-82a7-e41e9eb25230 became leader
	W1213 11:52:35.398466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:35.415407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 11:52:35.485083       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-326948_9996b2df-3786-4199-82a7-e41e9eb25230!
	W1213 11:52:37.427077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:37.448768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:39.452009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 11:52:39.459449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326948 -n embed-certs-326948
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326948 -n embed-certs-326948: exit status 2 (457.108244ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-326948 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (505.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 11:53:05.574864  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:05.581317  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:05.592821  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:05.614341  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:05.655836  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:05.737427  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:05.899136  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:06.220889  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:06.863092  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:08.144807  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:10.706992  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:15.829002  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:26.070330  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:53:46.551826  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:54:06.639679  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:54:27.513300  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:31.006297  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:44.682895  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:44.689387  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:44.700901  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:44.722386  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:44.763885  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:44.845305  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:45.008769  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:45.330806  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:45.973048  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:47.254949  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:49.434772  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:49.816484  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:55:54.938148  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:56:05.180470  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:56:25.661994  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:56:43.536520  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:57:00.470887  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:57:06.623475  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:57:27.930436  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:58:05.575164  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:58:28.545056  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:58:33.276736  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:59:06.639720  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:00:44.682890  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m24.088403616s)

                                                
                                                
-- stdout --
	* [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:52:44.222945  607523 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:44.223057  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223099  607523 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:44.223106  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223364  607523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:44.223812  607523 out.go:368] Setting JSON to false
	I1213 11:52:44.224724  607523 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12917,"bootTime":1765613848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:44.224797  607523 start.go:143] virtualization:  
	I1213 11:52:44.228935  607523 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:44.232087  607523 notify.go:221] Checking for updates...
	I1213 11:52:44.232862  607523 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:44.236046  607523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:44.241086  607523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:44.244482  607523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:44.247343  607523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:44.250267  607523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:44.253709  607523 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:44.253853  607523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:44.284666  607523 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:44.284774  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.401910  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.38729859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.402031  607523 docker.go:319] overlay module found
	I1213 11:52:44.405585  607523 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:44.408428  607523 start.go:309] selected driver: docker
	I1213 11:52:44.408454  607523 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:44.408468  607523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:44.409713  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.548406  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.53777287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.548555  607523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:52:44.548581  607523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:52:44.549476  607523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:52:44.552258  607523 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:44.555279  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.555356  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.555365  607523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:44.555448  607523 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:44.558889  607523 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 11:52:44.561893  607523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:44.564946  607523 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:44.567939  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:44.568029  607523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:52:44.568050  607523 cache.go:65] Caching tarball of preloaded images
	I1213 11:52:44.568145  607523 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:52:44.568156  607523 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 11:52:44.568295  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:44.568315  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json: {Name:mkca051d0f4222f12ada2e542e9765aa1caaa1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:44.568460  607523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:44.614235  607523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:44.614511  607523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:44.614568  607523 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:44.614617  607523 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:44.614732  607523 start.go:364] duration metric: took 92.595µs to acquireMachinesLock for "newest-cni-800979"
	I1213 11:52:44.614763  607523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:44.614850  607523 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:44.618660  607523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:44.618986  607523 start.go:159] libmachine.API.Create for "newest-cni-800979" (driver="docker")
	I1213 11:52:44.619024  607523 client.go:173] LocalClient.Create starting
	I1213 11:52:44.619095  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:44.619134  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619169  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619234  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:44.619259  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619275  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619828  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:44.681886  607523 cli_runner.go:211] docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:44.682019  607523 network_create.go:284] running [docker network inspect newest-cni-800979] to gather additional debugging logs...
	I1213 11:52:44.682044  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979
	W1213 11:52:44.783263  607523 cli_runner.go:211] docker network inspect newest-cni-800979 returned with exit code 1
	I1213 11:52:44.783303  607523 network_create.go:287] error running [docker network inspect newest-cni-800979]: docker network inspect newest-cni-800979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800979 not found
	I1213 11:52:44.783456  607523 network_create.go:289] output of [docker network inspect newest-cni-800979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800979 not found
	
	** /stderr **
	I1213 11:52:44.783853  607523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:44.869365  607523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:44.869936  607523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:44.870324  607523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:44.872231  607523 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 11:52:44.872625  607523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-280e424abad6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5e:ad:5b:52:ee:cb} reservation:<nil>}
	I1213 11:52:44.873100  607523 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a730}
	I1213 11:52:44.873121  607523 network_create.go:124] attempt to create docker network newest-cni-800979 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 11:52:44.873186  607523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800979 newest-cni-800979
	I1213 11:52:45.033952  607523 network_create.go:108] docker network newest-cni-800979 192.168.94.0/24 created
	I1213 11:52:45.033989  607523 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-800979" container
	I1213 11:52:45.034089  607523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:45.110922  607523 cli_runner.go:164] Run: docker volume create newest-cni-800979 --label name.minikube.sigs.k8s.io=newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:45.147181  607523 oci.go:103] Successfully created a docker volume newest-cni-800979
	I1213 11:52:45.148756  607523 cli_runner.go:164] Run: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:46.576150  607523 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.427287827s)
	I1213 11:52:46.576182  607523 oci.go:107] Successfully prepared a docker volume newest-cni-800979
	I1213 11:52:46.576222  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:46.576231  607523 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:52:46.576286  607523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:52:51.477960  607523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.901639858s)
	I1213 11:52:51.478004  607523 kic.go:203] duration metric: took 4.901755297s to extract preloaded images to volume ...
	W1213 11:52:51.478154  607523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:51.478257  607523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:51.600099  607523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800979 --name newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800979 --network newest-cni-800979 --ip 192.168.94.2 --volume newest-cni-800979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:52.003446  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Running}}
	I1213 11:52:52.025630  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.044945  607523 cli_runner.go:164] Run: docker exec newest-cni-800979 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:52.103780  607523 oci.go:144] the created container "newest-cni-800979" has a running status.
	I1213 11:52:52.103827  607523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa...
	I1213 11:52:52.454986  607523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:52.499855  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.520167  607523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:52.520186  607523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-800979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:52.595209  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.616614  607523 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:52.616710  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:52.645695  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:52.646054  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:52.646065  607523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:52.646853  607523 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49104->127.0.0.1:33463: read: connection reset by peer
	I1213 11:52:55.795509  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.795546  607523 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 11:52:55.795609  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:55.823768  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:55.824086  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:55.824105  607523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 11:52:55.984531  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.984627  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.004427  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.004789  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.004806  607523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:56.155779  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:52:56.155809  607523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:56.155840  607523 ubuntu.go:190] setting up certificates
	I1213 11:52:56.155849  607523 provision.go:84] configureAuth start
	I1213 11:52:56.155916  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:56.173051  607523 provision.go:143] copyHostCerts
	I1213 11:52:56.173126  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:56.173140  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:56.173218  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:56.173314  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:56.173326  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:56.173354  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:56.173407  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:56.173416  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:56.173440  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:56.173493  607523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 11:52:56.495741  607523 provision.go:177] copyRemoteCerts
	I1213 11:52:56.495819  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:56.495860  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.513776  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:56.623272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:56.640893  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:56.658251  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:56.675898  607523 provision.go:87] duration metric: took 520.035144ms to configureAuth
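The server certificate generated above is signed for the SANs [127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979] and then copied into /etc/docker on the node. A sketch for inspecting those SANs from the host, using the path shown in the log (not a step the test performs):
	# print the Subject Alternative Names baked into the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'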
	I1213 11:52:56.675924  607523 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:56.676119  607523 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:56.676229  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.693573  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.693885  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.693913  607523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:57.000433  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:57.000459  607523 machine.go:97] duration metric: took 4.383824523s to provisionDockerMachine
	I1213 11:52:57.000471  607523 client.go:176] duration metric: took 12.381437402s to LocalClient.Create
	I1213 11:52:57.000485  607523 start.go:167] duration metric: took 12.381502329s to libmachine.API.Create "newest-cni-800979"
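The SSH command a few lines above writes a one-line sysconfig file and restarts CRI-O so the extra registry flag takes effect. Reproduced by hand it would look roughly like the sketch below; whether crio.service actually sources /etc/sysconfig/crio.minikube through an EnvironmentFile= line is an assumption about the kicbase image, not something shown in this log:
	# contents written by the provisioner
	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	# confirm the unit references it (assumed EnvironmentFile= in crio.service)
	systemctl cat crio | grep -i environmentfile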
	I1213 11:52:57.000493  607523 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 11:52:57.000506  607523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:57.000573  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:57.000635  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.019654  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.123498  607523 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:57.126887  607523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:57.126915  607523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:57.126942  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:57.127003  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:57.127090  607523 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:57.127193  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:57.134628  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:57.153601  607523 start.go:296] duration metric: took 153.093637ms for postStartSetup
	I1213 11:52:57.154022  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.174170  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:57.174465  607523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:57.174516  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.191003  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.300652  607523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:57.305941  607523 start.go:128] duration metric: took 12.691075107s to createHost
	I1213 11:52:57.305969  607523 start.go:83] releasing machines lock for "newest-cni-800979", held for 12.691222882s
	I1213 11:52:57.306067  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.324383  607523 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:57.324411  607523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:57.324436  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.324473  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.349379  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.349454  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.540188  607523 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:57.546743  607523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:57.581981  607523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:57.586210  607523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:57.586277  607523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:57.614440  607523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
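The find/mv above sidelines any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only minikube's own CNI (kindnet, selected further below) is picked up. A sketch for confirming what was disabled, run inside the node (e.g. via minikube ssh):
	# configs minikube renamed out of the way
	ls -la /etc/cni/net.d/*.mk_disabled
	# configs still visible to the runtime
	ls -la /etc/cni/net.d/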
	I1213 11:52:57.614460  607523 start.go:496] detecting cgroup driver to use...
	I1213 11:52:57.614492  607523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:57.614539  607523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:57.632118  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:57.645277  607523 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:57.645361  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:57.663447  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:57.682384  607523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:57.805277  607523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:57.932514  607523 docker.go:234] disabling docker service ...
	I1213 11:52:57.932589  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:57.955202  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:57.968354  607523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:58.113128  607523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:58.247772  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:58.262298  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:58.277400  607523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:58.277526  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.287200  607523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:58.287335  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.296697  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.305672  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.315083  607523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:58.324248  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.333206  607523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.346564  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.355703  607523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:58.363253  607523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:58.370805  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:58.492125  607523 ssh_runner.go:195] Run: sudo systemctl restart crio
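The sed edits above set the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before this restart. A sketch for checking the result on the node; the expected values (shown as comments) are taken from the sed commands in the log, assuming no other drop-in overrides them:
	# show the keys touched by the edits above
	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",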
	I1213 11:52:58.663207  607523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:58.663336  607523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:58.667219  607523 start.go:564] Will wait 60s for crictl version
	I1213 11:52:58.667334  607523 ssh_runner.go:195] Run: which crictl
	I1213 11:52:58.671116  607523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:58.697501  607523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:52:58.697619  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.733197  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.768647  607523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:58.771459  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:58.789274  607523 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:58.795116  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:58.812164  607523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:52:58.814926  607523 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:58.815100  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:58.815179  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.855416  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.855438  607523 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:52:58.855493  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.882823  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.882846  607523 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:52:58.882855  607523 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:58.882940  607523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:58.883028  607523 ssh_runner.go:195] Run: crio config
	I1213 11:52:58.937332  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:58.937355  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:58.937377  607523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:52:58.937402  607523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:58.937530  607523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:58.937607  607523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:58.945256  607523 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:52:58.945332  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:58.952916  607523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:58.965421  607523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:58.978594  607523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
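The three scp memory writes above install the 10-kubeadm.conf drop-in, the kubelet.service unit and the kubeadm config staged as kubeadm.yaml.new. A sketch for confirming the merged kubelet unit on the node (via minikube ssh or docker exec), not a step the test runs:
	# show kubelet.service together with the 10-kubeadm.conf drop-in written above
	systemctl cat kubelet
	# list which unit file and drop-ins systemd merged after the daemon-reload below
	systemctl show kubelet -p FragmentPath -p DropInPaths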
	I1213 11:52:58.991343  607523 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:58.994981  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:59.006043  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:59.120731  607523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:59.136632  607523 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 11:52:59.136650  607523 certs.go:195] generating shared ca certs ...
	I1213 11:52:59.136667  607523 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.136813  607523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:59.136864  607523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:59.136875  607523 certs.go:257] generating profile certs ...
	I1213 11:52:59.136930  607523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 11:52:59.136948  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt with IP's: []
	I1213 11:52:59.229537  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt ...
	I1213 11:52:59.229569  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt: {Name:mk69c62c6a65f19f1e9ae6f6006b84310e5ca69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229797  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key ...
	I1213 11:52:59.229813  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key: {Name:mk0d678e2df0ba46ea7a7d9db0beddac15d16cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229927  607523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 11:52:59.229947  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 11:52:59.395722  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 ...
	I1213 11:52:59.395753  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606: {Name:mk2f0d7037f2191b2fb310c8e6e39abce6919307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.395933  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 ...
	I1213 11:52:59.395948  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606: {Name:mkeda4d05cf7f14a6919666348bb90fff24821e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.396035  607523 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt
	I1213 11:52:59.396122  607523 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key
	I1213 11:52:59.396187  607523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 11:52:59.396205  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt with IP's: []
	I1213 11:52:59.677399  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt ...
	I1213 11:52:59.677431  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt: {Name:mk4f6f44ef9664fbc510805af3a0a5d8216b34d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677617  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key ...
	I1213 11:52:59.677634  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key: {Name:mk08e1a717d212a6e36443fd4449253d4dfd4e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677867  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:59.677925  607523 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:59.677936  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:59.677963  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:59.677989  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:59.678018  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:59.678067  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:59.678646  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:59.697504  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:59.715937  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:59.733272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:59.751842  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:59.769868  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:52:59.787032  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:59.804197  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:52:59.822307  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:59.840119  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:59.857580  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:59.875033  607523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:59.887226  607523 ssh_runner.go:195] Run: openssl version
	I1213 11:52:59.893568  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.900683  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:59.907927  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911699  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911785  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.952546  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.959999  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.967191  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.974551  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:59.981936  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985667  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985735  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:53:00.029636  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:53:00.039949  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:53:00.051259  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.062203  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:53:00.071922  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077479  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077644  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.129667  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:53:00.145873  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
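Each ln -fs above pairs a CA PEM under /usr/share/ca-certificates with its OpenSSL subject-hash name in /etc/ssl/certs (for example minikubeCA.pem -> b5213941.0), which is how TLS clients on the node locate the trust anchor. The hash names can be reproduced with this sketch:
	# the symlink name is the cert's subject hash, exactly what the log computes above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	ls -l /etc/ssl/certs/b5213941.0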
	I1213 11:53:00.165719  607523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:53:00.182484  607523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:53:00.182650  607523 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:53:00.191964  607523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:53:00.192781  607523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:53:00.308764  607523 cri.go:89] found id: ""
	I1213 11:53:00.308851  607523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:53:00.339801  607523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
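At this point the staged config has been promoted to /var/tmp/minikube/kubeadm.yaml, the file handed to kubeadm init below. A quick schema check of such a config (a sketch using kubeadm's own validator, not part of the test flow):
	# validate the generated config against the kubeadm v1beta4 schema
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml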
	I1213 11:53:00.369102  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:53:00.369171  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:53:00.383298  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:53:00.383367  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:53:00.383424  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:53:00.395580  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:53:00.395656  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:53:00.405571  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:53:00.415778  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:53:00.415854  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:53:00.424800  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.434079  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:53:00.434162  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.443040  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:53:00.452144  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:53:00.452246  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:53:00.461542  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:53:00.503183  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:53:00.503307  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:53:00.580961  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:53:00.581064  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:53:00.581117  607523 kubeadm.go:319] OS: Linux
	I1213 11:53:00.581167  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:53:00.581226  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:53:00.581277  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:53:00.581327  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:53:00.581379  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:53:00.581429  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:53:00.581478  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:53:00.581529  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:53:00.581581  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:53:00.654422  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:53:00.654539  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:53:00.654635  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:53:00.667854  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:53:00.673949  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:53:00.674119  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:53:00.674229  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:53:00.749466  607523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:53:00.853085  607523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:53:01.087749  607523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:53:01.312048  607523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:53:01.513347  607523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:53:01.513768  607523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:01.838749  607523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:53:01.839657  607523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:02.478657  607523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:53:02.876105  607523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:53:03.010338  607523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:53:03.010418  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:53:03.200889  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:53:03.653890  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:53:04.344965  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:53:04.580887  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:53:04.785257  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:53:04.787179  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:53:04.796409  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:53:04.799699  607523 out.go:252]   - Booting up control plane ...
	I1213 11:53:04.799829  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:53:04.799918  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:53:04.803001  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:53:04.836757  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:53:04.837037  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:53:04.849469  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:53:04.850109  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:53:04.853862  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:53:05.015188  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:53:05.015326  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:57:05.013826  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000267538s
	I1213 11:57:05.013870  607523 kubeadm.go:319] 
	I1213 11:57:05.013935  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:57:05.013971  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:57:05.014088  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:57:05.014096  607523 kubeadm.go:319] 
	I1213 11:57:05.014210  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:57:05.014246  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:57:05.014279  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:57:05.014287  607523 kubeadm.go:319] 
	I1213 11:57:05.020057  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:57:05.020490  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:57:05.020604  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:57:05.020844  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:57:05.020856  607523 kubeadm.go:319] 
	I1213 11:57:05.020925  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
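The init failed because the kubelet never answered its health check within kubeadm's 4-minute window. The log's own suggestions translate to the following sketch for this docker-driver profile (container and profile names taken from the run above):
	# run the suggested checks inside the kic node container
	docker exec newest-cni-800979 systemctl status kubelet
	docker exec newest-cni-800979 journalctl -xeu kubelet --no-pager | tail -n 50
	# the endpoint kubeadm was polling
	docker exec newest-cni-800979 curl -sS http://127.0.0.1:10248/healthz
	# or equivalently through minikube
	minikube ssh -p newest-cni-800979 -- sudo journalctl -u kubelet --no-pager | tail -n 50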
	W1213 11:57:05.021047  607523 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000267538s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000267538s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1213 11:57:05.021134  607523 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:57:05.432952  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:57:05.445933  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:57:05.446023  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:57:05.454556  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:57:05.454578  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:57:05.454629  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:57:05.462597  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:57:05.462670  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:57:05.470456  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:57:05.478316  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:57:05.478382  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:57:05.485947  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.494252  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:57:05.494320  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.502133  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:57:05.510237  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:57:05.510311  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:57:05.518001  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:57:05.584840  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:57:05.585142  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:57:05.657959  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:57:05.658125  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:57:05.658198  607523 kubeadm.go:319] OS: Linux
	I1213 11:57:05.658288  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:57:05.658378  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:57:05.658471  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:57:05.658558  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:57:05.658635  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:57:05.658730  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:57:05.658813  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:57:05.658915  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:57:05.659000  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:57:05.731597  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:57:05.731775  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:57:05.731903  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:57:05.740855  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:57:05.744423  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:57:05.744578  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:57:05.744679  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:57:05.744796  607523 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:57:05.744887  607523 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:57:05.744992  607523 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:57:05.745076  607523 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:57:05.745170  607523 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:57:05.745499  607523 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:57:05.745582  607523 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:57:05.745655  607523 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:57:05.745694  607523 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:57:05.745749  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:57:05.913677  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:57:06.384962  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:57:07.036559  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:57:07.437110  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:57:07.602655  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:57:07.603483  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:57:07.607251  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:57:07.612344  607523 out.go:252]   - Booting up control plane ...
	I1213 11:57:07.612453  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:57:07.612542  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:57:07.612663  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:57:07.626734  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:57:07.627071  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:57:07.634285  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:57:07.634609  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:57:07.634655  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:57:07.773578  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:57:07.773700  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:01:07.773320  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000195913s
	I1213 12:01:07.773347  607523 kubeadm.go:319] 
	I1213 12:01:07.773405  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:01:07.773438  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:01:07.773542  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:01:07.773547  607523 kubeadm.go:319] 
	I1213 12:01:07.773652  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:01:07.773685  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:01:07.773715  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:01:07.773720  607523 kubeadm.go:319] 
	I1213 12:01:07.777876  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:01:07.778275  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:01:07.778377  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:01:07.778624  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:01:07.778630  607523 kubeadm.go:319] 
	I1213 12:01:07.778695  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:01:07.778746  607523 kubeadm.go:403] duration metric: took 8m7.596100369s to StartCluster
	I1213 12:01:07.778786  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:01:07.778843  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:01:07.814673  607523 cri.go:89] found id: ""
	I1213 12:01:07.814694  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.814703  607523 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:01:07.814709  607523 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:01:07.814771  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:01:07.872169  607523 cri.go:89] found id: ""
	I1213 12:01:07.872191  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.872199  607523 logs.go:284] No container was found matching "etcd"
	I1213 12:01:07.872205  607523 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:01:07.872262  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:01:07.897159  607523 cri.go:89] found id: ""
	I1213 12:01:07.897183  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.897192  607523 logs.go:284] No container was found matching "coredns"
	I1213 12:01:07.897198  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:01:07.897271  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:01:07.926240  607523 cri.go:89] found id: ""
	I1213 12:01:07.926266  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.926275  607523 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:01:07.926285  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:01:07.926342  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:01:07.954071  607523 cri.go:89] found id: ""
	I1213 12:01:07.954144  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.954168  607523 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:01:07.954187  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:01:07.954259  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:01:07.980272  607523 cri.go:89] found id: ""
	I1213 12:01:07.980300  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.980310  607523 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:01:07.980316  607523 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:01:07.980371  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:01:08.011383  607523 cri.go:89] found id: ""
	I1213 12:01:08.011411  607523 logs.go:282] 0 containers: []
	W1213 12:01:08.011421  607523 logs.go:284] No container was found matching "kindnet"
	I1213 12:01:08.011431  607523 logs.go:123] Gathering logs for kubelet ...
	I1213 12:01:08.011442  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:01:08.079910  607523 logs.go:123] Gathering logs for dmesg ...
	I1213 12:01:08.079950  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:01:08.097373  607523 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:01:08.097401  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:01:08.160941  607523 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:01:08.153055    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.153840    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155465    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155845    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.157368    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:01:08.153055    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.153840    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155465    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155845    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.157368    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:01:08.161010  607523 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:01:08.161029  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:01:08.192670  607523 logs.go:123] Gathering logs for container status ...
	I1213 12:01:08.192707  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:01:08.220898  607523 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:01:08.220962  607523 out.go:285] * 
	* 
	W1213 12:01:08.221021  607523 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:01:08.221042  607523 out.go:285] * 
	* 
	W1213 12:01:08.223167  607523 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:01:08.228262  607523 out.go:203] 
	W1213 12:01:08.230390  607523 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:01:08.230436  607523 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:01:08.230456  607523 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:01:08.233619  607523 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-800979
helpers_test.go:244: (dbg) docker inspect newest-cni-800979:

-- stdout --
	[
	    {
	        "Id": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	        "Created": "2025-12-13T11:52:51.619651061Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 608187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:52:51.70884903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hosts",
	        "LogPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef-json.log",
	        "Name": "/newest-cni-800979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-800979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-800979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	                "LowerDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-800979",
	                "Source": "/var/lib/docker/volumes/newest-cni-800979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-800979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-800979",
	                "name.minikube.sigs.k8s.io": "newest-cni-800979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05cea40e8c1eaa213015e5d86b7630be51a595e18678344c509541c6234a6461",
	            "SandboxKey": "/var/run/docker/netns/05cea40e8c1e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-800979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:d0:81:44:f6:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de59fc08c8081b0c37df8bacf82db2ccccb307596588e9c22d7d094938935e3c",
	                    "EndpointID": "748f656075b24b4919ccd977616a9f21ba5987f640fc9fc2eca0de1a70fbf555",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-800979",
	                        "4aef671a766b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979: exit status 6 (349.800117ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1213 12:01:08.664820  618304 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-800979" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
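The status output above shows the host container running but kubectl pointing at a stale context, with no "newest-cni-800979" entry in the kubeconfig at /home/jenkins/minikube-integration/22127-354468/kubeconfig. A minimal sketch of the repair the warning itself suggests (profile name and binary path taken from this log; these commands were not run as part of this test):

	# see which context kubectl currently targets
	kubectl config current-context
	# rewrite the kubeconfig entry for this profile, then re-check status
	out/minikube-linux-arm64 -p newest-cni-800979 update-context
	out/minikube-linux-arm64 status --format='{{.Host}}' -p newest-cni-800979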
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                            │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:52:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:52:44.222945  607523 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:44.223057  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223099  607523 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:44.223106  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223364  607523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:44.223812  607523 out.go:368] Setting JSON to false
	I1213 11:52:44.224724  607523 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12917,"bootTime":1765613848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:44.224797  607523 start.go:143] virtualization:  
	I1213 11:52:44.228935  607523 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:44.232087  607523 notify.go:221] Checking for updates...
	I1213 11:52:44.232862  607523 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:44.236046  607523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:44.241086  607523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:44.244482  607523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:44.247343  607523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:44.250267  607523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:44.253709  607523 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:44.253853  607523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:44.284666  607523 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:44.284774  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.401910  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.38729859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.402031  607523 docker.go:319] overlay module found
	I1213 11:52:44.405585  607523 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:44.408428  607523 start.go:309] selected driver: docker
	I1213 11:52:44.408454  607523 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:44.408468  607523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:44.409713  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.548406  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.53777287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.548555  607523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:52:44.548581  607523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:52:44.549476  607523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:52:44.552258  607523 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:44.555279  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.555356  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.555365  607523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:44.555448  607523 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:44.558889  607523 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 11:52:44.561893  607523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:44.564946  607523 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:44.567939  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:44.568029  607523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:52:44.568050  607523 cache.go:65] Caching tarball of preloaded images
	I1213 11:52:44.568145  607523 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:52:44.568156  607523 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 11:52:44.568295  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:44.568315  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json: {Name:mkca051d0f4222f12ada2e542e9765aa1caaa1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:44.568460  607523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:44.614235  607523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:44.614511  607523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:44.614568  607523 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:44.614617  607523 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:44.614732  607523 start.go:364] duration metric: took 92.595µs to acquireMachinesLock for "newest-cni-800979"
	I1213 11:52:44.614763  607523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:44.614850  607523 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:43.447904  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.748996566s)
	I1213 11:52:43.447934  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 11:52:43.447952  603921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:43.448001  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:44.178615  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:52:44.178655  603921 cache_images.go:125] Successfully loaded all cached images
	I1213 11:52:44.178662  603921 cache_images.go:94] duration metric: took 13.878753268s to LoadCachedImages
	I1213 11:52:44.178674  603921 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:44.178763  603921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
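	# Aside (not captured output): the empty "ExecStart=" line above is the standard
	# systemd drop-in idiom -- it clears the ExecStart inherited from kubelet.service
	# before the minikube-specific command is set; the snippet is written to
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this log.
	# An illustrative way to confirm the merged unit on the node would be:
	systemctl cat kubelet                  # kubelet.service plus its drop-ins
	systemctl show kubelet -p ExecStart    # the effective start command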
	I1213 11:52:44.178851  603921 ssh_runner.go:195] Run: crio config
	I1213 11:52:44.242383  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.242401  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.242418  603921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:52:44.242441  603921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:44.242555  603921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:44.242622  603921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.254521  603921 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 11:52:44.254582  603921 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.274613  603921 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 11:52:44.274705  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 11:52:44.275568  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 11:52:44.278466  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 11:52:44.279131  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 11:52:44.279162  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 11:52:45.122331  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:45.166456  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 11:52:45.191725  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 11:52:45.191781  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 11:52:45.304315  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 11:52:45.334054  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 11:52:45.334112  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
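	# Sketch (assumed manual equivalent, not executed in this run): the downloads
	# above are checksum-pinned to the published .sha256 files on dl.k8s.io;
	# verifying one binary by hand would look like:
	VER=v1.35.0-beta.0; ARCH=arm64
	curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/kubelet"
	curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/${ARCH}/kubelet.sha256"
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check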
	I1213 11:52:46.015388  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:46.024888  603921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:46.040762  603921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:46.056856  603921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 11:52:46.080441  603921 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:46.084885  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:46.097815  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:46.230479  603921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:46.251958  603921 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 11:52:46.251982  603921 certs.go:195] generating shared ca certs ...
	I1213 11:52:46.251998  603921 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.252212  603921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:46.252287  603921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:46.252302  603921 certs.go:257] generating profile certs ...
	I1213 11:52:46.252373  603921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 11:52:46.252392  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt with IP's: []
	I1213 11:52:46.687159  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt ...
	I1213 11:52:46.687196  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt: {Name:mkd3b6de93eb4d0d7c38606e110ec8041a7a8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687382  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key ...
	I1213 11:52:46.687530  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key: {Name:mk69f4e38edb3a6758b30b8919bec09ed6524780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687680  603921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 11:52:46.687705  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:52:47.101196  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b ...
	I1213 11:52:47.101275  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b: {Name:mkf348306e6448fd779f0c40568bfbc2591db27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101515  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b ...
	I1213 11:52:47.101554  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b: {Name:mk67006fcc87c7852dc9dd2baf2e5c091f89fb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101697  603921 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt
	I1213 11:52:47.101816  603921 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key
	I1213 11:52:47.101906  603921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 11:52:47.101964  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt with IP's: []
	I1213 11:52:47.391626  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt ...
	I1213 11:52:47.391702  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt: {Name:mk6bf9ff3c46be8a69edc887a1d740e84c930536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.391910  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key ...
	I1213 11:52:47.391946  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key: {Name:mk5282a1a4966c51394d6aeb663ae12cef8b3a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.392186  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:47.392256  603921 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:47.392281  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:47.392345  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:47.392401  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:47.392449  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:47.392534  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:47.393177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:47.413169  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:47.433634  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:47.456446  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:47.475453  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:47.495921  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:52:47.516359  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:47.533557  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:52:47.553686  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:47.576528  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:47.595023  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:47.617574  603921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:47.632766  603921 ssh_runner.go:195] Run: openssl version
	I1213 11:52:47.642255  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.651062  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:47.660280  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665117  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665212  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.711366  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:52:47.719094  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:52:47.727218  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.735147  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:52:47.743430  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748386  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748477  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.811036  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:52:47.824172  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:52:47.833720  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.842937  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:47.852257  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857336  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857459  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.913987  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.923742  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
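	# Sketch (paths from the log above; not captured output): each openssl/ln pair
	# implements the c_rehash-style trust layout -- the CA's subject hash names a
	# "<hash>.0" symlink under /etc/ssl/certs, e.g. b5213941.0 for minikubeCA.pem:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"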
	I1213 11:52:47.932105  603921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:52:47.937831  603921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:52:47.937953  603921 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:47.938056  603921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:52:47.938131  603921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:52:47.977617  603921 cri.go:89] found id: ""
	I1213 11:52:47.977734  603921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:52:47.986677  603921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:52:47.995428  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:52:47.995568  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:52:48.012929  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:52:48.013001  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:52:48.013078  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:52:48.023587  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:52:48.023720  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:52:48.033048  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:52:48.042898  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:52:48.043030  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:52:48.052336  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.062442  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:52:48.062560  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.071404  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:52:48.081302  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:52:48.081415  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
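The sequence above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is not found (here the files simply do not exist yet, hence the exit status 2 messages). A compact sketch of the same check, with the endpoint and file names taken from the log:

    # Keep a kubeconfig only if it already points at the expected control-plane endpoint
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done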
	I1213 11:52:48.090412  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:52:48.139895  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:52:48.140310  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:52:48.244346  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:52:48.244445  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:52:48.244514  603921 kubeadm.go:319] OS: Linux
	I1213 11:52:48.244581  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:52:48.244649  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:52:48.244717  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:52:48.244785  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:52:48.244849  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:52:48.244917  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:52:48.244983  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:52:48.245052  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:52:48.245113  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:52:48.326956  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:52:48.327125  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:52:48.327254  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:52:48.353781  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:52:44.618660  607523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:44.618986  607523 start.go:159] libmachine.API.Create for "newest-cni-800979" (driver="docker")
	I1213 11:52:44.619024  607523 client.go:173] LocalClient.Create starting
	I1213 11:52:44.619095  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:44.619134  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619169  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619234  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:44.619259  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619275  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619828  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:44.681886  607523 cli_runner.go:211] docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:44.682019  607523 network_create.go:284] running [docker network inspect newest-cni-800979] to gather additional debugging logs...
	I1213 11:52:44.682044  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979
	W1213 11:52:44.783263  607523 cli_runner.go:211] docker network inspect newest-cni-800979 returned with exit code 1
	I1213 11:52:44.783303  607523 network_create.go:287] error running [docker network inspect newest-cni-800979]: docker network inspect newest-cni-800979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800979 not found
	I1213 11:52:44.783456  607523 network_create.go:289] output of [docker network inspect newest-cni-800979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800979 not found
	
	** /stderr **
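The long Go template passed to docker network inspect above fails only because the network does not exist yet ("network newest-cni-800979 not found"). Once the network is created, the subnet and gateway that template extracts can also be read with a much shorter template, for example:

    # Read subnet and gateway of the user-defined bridge network (name from the log)
    docker network inspect newest-cni-800979 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'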
	I1213 11:52:44.783853  607523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:44.869365  607523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:44.869936  607523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:44.870324  607523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:44.872231  607523 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 11:52:44.872625  607523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-280e424abad6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5e:ad:5b:52:ee:cb} reservation:<nil>}
	I1213 11:52:44.873100  607523 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a730}
	I1213 11:52:44.873121  607523 network_create.go:124] attempt to create docker network newest-cni-800979 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 11:52:44.873186  607523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800979 newest-cni-800979
	I1213 11:52:45.033952  607523 network_create.go:108] docker network newest-cni-800979 192.168.94.0/24 created
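Before creating the network, minikube walked the private subnets above and skipped the ones already claimed by existing bridges (192.168.49.0/24, .58.0/24, .67.0/24, .85.0/24) before settling on 192.168.94.0/24. An illustrative way to see which subnets are taken on the host; this exact command is not part of the run:

    # List every bridge network together with the subnet it occupies
    docker network ls --filter driver=bridge -q \
      | xargs docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{end}}'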
	I1213 11:52:45.033989  607523 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-800979" container
	I1213 11:52:45.034089  607523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:45.110922  607523 cli_runner.go:164] Run: docker volume create newest-cni-800979 --label name.minikube.sigs.k8s.io=newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:45.147181  607523 oci.go:103] Successfully created a docker volume newest-cni-800979
	I1213 11:52:45.148756  607523 cli_runner.go:164] Run: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:46.576150  607523 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.427287827s)
	I1213 11:52:46.576182  607523 oci.go:107] Successfully prepared a docker volume newest-cni-800979
	I1213 11:52:46.576222  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:46.576231  607523 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:52:46.576286  607523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:52:48.362615  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:52:48.362749  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:52:48.362861  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:52:48.406340  603921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:52:48.617898  603921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:52:48.894950  603921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:52:49.002897  603921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:52:49.595632  603921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:52:49.596022  603921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.703067  603921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:52:49.703500  603921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.852748  603921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:52:49.985441  603921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:52:50.361702  603921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:52:50.362007  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:52:50.448441  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:52:50.524868  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:52:51.254957  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:52:51.473347  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:52:51.686418  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:52:51.686517  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:52:51.690277  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:52:51.694117  603921 out.go:252]   - Booting up control plane ...
	I1213 11:52:51.694231  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:52:51.694310  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:52:51.695018  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:52:51.714016  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:52:51.714689  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:52:51.728439  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:52:51.728548  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:52:51.728589  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:52:51.918802  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:52:51.918928  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
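The kubelet-check above polls the kubelet's local health endpoint until it answers. The same URL from the log line can be queried by hand on the node to see what kubeadm is waiting for:

    # Expect "ok" once the kubelet is serving (endpoint taken from the log line above)
    curl -fsS http://127.0.0.1:10248/healthz; echo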
	I1213 11:52:51.477960  607523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.901639858s)
	I1213 11:52:51.478004  607523 kic.go:203] duration metric: took 4.901755297s to extract preloaded images to volume ...
	W1213 11:52:51.478154  607523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:51.478257  607523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:51.600099  607523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800979 --name newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800979 --network newest-cni-800979 --ip 192.168.94.2 --volume newest-cni-800979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:52.003446  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Running}}
	I1213 11:52:52.025630  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.044945  607523 cli_runner.go:164] Run: docker exec newest-cni-800979 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:52.103780  607523 oci.go:144] the created container "newest-cni-800979" has a running status.
	I1213 11:52:52.103827  607523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa...
	I1213 11:52:52.454986  607523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:52.499855  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.520167  607523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:52.520186  607523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-800979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:52.595209  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.616614  607523 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:52.616710  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:52.645695  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:52.646054  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:52.646065  607523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:52.646853  607523 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49104->127.0.0.1:33463: read: connection reset by peer
	I1213 11:52:55.795509  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.795546  607523 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 11:52:55.795609  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:55.823768  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:55.824086  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:55.824105  607523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 11:52:55.984531  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.984627  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.004427  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.004789  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.004806  607523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:56.155779  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
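Provisioning above runs over SSH to the kic container through the host port Docker published for 22/tcp (33463 here), using the generated id_rsa key and the "docker" user. A sketch of connecting the same way by hand, reusing the inspect template, key path, and username from the log (it assumes the container and key from this run still exist):

    PORT=$(docker container inspect newest-cni-800979 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}')
    ssh -i /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa \
      -p "$PORT" docker@127.0.0.1 hostname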
	I1213 11:52:56.155809  607523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:56.155840  607523 ubuntu.go:190] setting up certificates
	I1213 11:52:56.155849  607523 provision.go:84] configureAuth start
	I1213 11:52:56.155916  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:56.173051  607523 provision.go:143] copyHostCerts
	I1213 11:52:56.173126  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:56.173140  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:56.173218  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:56.173314  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:56.173326  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:56.173354  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:56.173407  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:56.173416  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:56.173440  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:56.173493  607523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 11:52:56.495741  607523 provision.go:177] copyRemoteCerts
	I1213 11:52:56.495819  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:56.495860  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.513776  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:56.623272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:56.640893  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:56.658251  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:56.675898  607523 provision.go:87] duration metric: took 520.035144ms to configureAuth
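configureAuth above generated a Docker server certificate with SANs [127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979] and copied it to /etc/docker on the node. One way to confirm what ended up in the generated file, using the server.pem path from the log:

    # Print the Subject Alternative Names recorded in the generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'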
	I1213 11:52:56.675924  607523 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:56.676119  607523 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:56.676229  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.693573  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.693885  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.693913  607523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:57.000433  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:57.000459  607523 machine.go:97] duration metric: took 4.383824523s to provisionDockerMachine
	I1213 11:52:57.000471  607523 client.go:176] duration metric: took 12.381437402s to LocalClient.Create
	I1213 11:52:57.000485  607523 start.go:167] duration metric: took 12.381502329s to libmachine.API.Create "newest-cni-800979"
	I1213 11:52:57.000493  607523 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 11:52:57.000506  607523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:57.000573  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:57.000635  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.019654  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.123498  607523 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:57.126887  607523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:57.126915  607523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:57.126942  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:57.127003  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:57.127090  607523 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:57.127193  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:57.134628  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:57.153601  607523 start.go:296] duration metric: took 153.093637ms for postStartSetup
	I1213 11:52:57.154022  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.174170  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:57.174465  607523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:57.174516  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.191003  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.300652  607523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:57.305941  607523 start.go:128] duration metric: took 12.691075107s to createHost
	I1213 11:52:57.305969  607523 start.go:83] releasing machines lock for "newest-cni-800979", held for 12.691222882s
	I1213 11:52:57.306067  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.324383  607523 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:57.324411  607523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:57.324436  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.324473  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.349379  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.349454  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.540188  607523 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:57.546743  607523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:57.581981  607523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:57.586210  607523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:57.586277  607523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:57.614440  607523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:52:57.614460  607523 start.go:496] detecting cgroup driver to use...
	I1213 11:52:57.614492  607523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:57.614539  607523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:57.632118  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:57.645277  607523 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:57.645361  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:57.663447  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:57.682384  607523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:57.805277  607523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:57.932514  607523 docker.go:234] disabling docker service ...
	I1213 11:52:57.932589  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:57.955202  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:57.968354  607523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:58.113128  607523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:58.247772  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:58.262298  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:58.277400  607523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:58.277526  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.287200  607523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:58.287335  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.296697  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.305672  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.315083  607523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:58.324248  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.333206  607523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.346564  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.355703  607523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:58.363253  607523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:58.370805  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:58.492125  607523 ssh_runner.go:195] Run: sudo systemctl restart crio
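The sed commands above rewrite CRI-O's drop-in config (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before daemon-reload and the crio restart. After the restart, a quick illustrative check that the edits landed, using the file path from the log:

    # Show the settings minikube just rewrote in the CRI-O drop-in config
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf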
	I1213 11:52:58.663207  607523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:58.663336  607523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:58.667219  607523 start.go:564] Will wait 60s for crictl version
	I1213 11:52:58.667334  607523 ssh_runner.go:195] Run: which crictl
	I1213 11:52:58.671116  607523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:58.697501  607523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:52:58.697619  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.733197  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.768647  607523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:58.771459  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:58.789274  607523 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:58.795116  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:58.812164  607523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:52:58.814926  607523 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:58.815100  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:58.815179  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.855416  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.855438  607523 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:52:58.855493  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.882823  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.882846  607523 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:52:58.882855  607523 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:58.882940  607523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:58.883028  607523 ssh_runner.go:195] Run: crio config
	I1213 11:52:58.937332  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:58.937355  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:58.937377  607523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:52:58.937402  607523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:58.937530  607523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:58.937607  607523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:58.945256  607523 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:52:58.945332  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:58.952916  607523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:58.965421  607523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:58.978594  607523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
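The rendered kubeadm config (the YAML shown a few lines earlier) is written to /var/tmp/minikube/kubeadm.yaml.new above. As an aside, a config of this shape can be exercised without performing a real init by using kubeadm's dry-run mode; this is not something the run itself does:

    # Validate the config and render manifests without modifying the node's /etc/kubernetes
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run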
	I1213 11:52:58.991343  607523 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:58.994981  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:59.006043  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:59.120731  607523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:59.136632  607523 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 11:52:59.136650  607523 certs.go:195] generating shared ca certs ...
	I1213 11:52:59.136667  607523 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.136813  607523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:59.136864  607523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:59.136875  607523 certs.go:257] generating profile certs ...
	I1213 11:52:59.136930  607523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 11:52:59.136948  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt with IP's: []
	I1213 11:52:59.229537  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt ...
	I1213 11:52:59.229569  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt: {Name:mk69c62c6a65f19f1e9ae6f6006b84310e5ca69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229797  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key ...
	I1213 11:52:59.229813  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key: {Name:mk0d678e2df0ba46ea7a7d9db0beddac15d16cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229927  607523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 11:52:59.229947  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 11:52:59.395722  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 ...
	I1213 11:52:59.395753  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606: {Name:mk2f0d7037f2191b2fb310c8e6e39abce6919307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.395933  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 ...
	I1213 11:52:59.395948  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606: {Name:mkeda4d05cf7f14a6919666348bb90fff24821e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.396035  607523 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt
	I1213 11:52:59.396122  607523 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key
	I1213 11:52:59.396187  607523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 11:52:59.396205  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt with IP's: []
	I1213 11:52:59.677399  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt ...
	I1213 11:52:59.677431  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt: {Name:mk4f6f44ef9664fbc510805af3a0a5d8216b34d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677617  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key ...
	I1213 11:52:59.677634  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key: {Name:mk08e1a717d212a6e36443fd4449253d4dfd4e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
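The profile certs generated above include an apiserver certificate signed for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]. To confirm what ended up in the generated file, using the apiserver.crt path from the log:

    # Print the SANs on the freshly generated apiserver certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt \
      | grep -A2 'Subject Alternative Name'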
	I1213 11:52:59.677867  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:59.677925  607523 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:59.677936  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:59.677963  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:59.677989  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:59.678018  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:59.678067  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:59.678646  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:59.697504  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:59.715937  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:59.733272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:59.751842  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:59.769868  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:52:59.787032  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:59.804197  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:52:59.822307  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:59.840119  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:59.857580  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:59.875033  607523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
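	The scp lines above record where minikube places material on the node: cluster and proxy-client CA files under /var/lib/minikube/certs, host CA copies under /usr/share/ca-certificates, and the embedded kubeconfig at /var/lib/minikube/kubeconfig. If the node from this run is still up, the same paths can be inspected through minikube's SSH wrapper; the profile name is taken from this log and the exact ssh invocation is an assumption about the current CLI:

	# inspect what was copied onto the node (profile name from this log; command form is an assumption)
	minikube -p newest-cni-800979 ssh "sudo ls -la /var/lib/minikube/certs /var/lib/minikube/kubeconfig"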
	I1213 11:52:59.887226  607523 ssh_runner.go:195] Run: openssl version
	I1213 11:52:59.893568  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.900683  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:59.907927  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911699  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911785  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.952546  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.959999  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.967191  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.974551  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:59.981936  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985667  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985735  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:53:00.029636  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:53:00.039949  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:53:00.051259  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.062203  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:53:00.071922  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077479  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077644  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.129667  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:53:00.145873  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
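	The openssl/ln sequence above follows the standard OpenSSL trust-store layout: each CA under /usr/share/ca-certificates is hashed and a <hash>.0 symlink is created in /etc/ssl/certs so TLS clients can locate it (b5213941.0 is the hash link for minikubeCA.pem in this run). A minimal sketch of the same pattern with a generic path:

	# hash a CA certificate and register it in the OpenSSL trust store, as the log does
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"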
	I1213 11:53:00.165719  607523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:53:00.182484  607523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:53:00.182650  607523 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:53:00.191964  607523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:53:00.192781  607523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:53:00.308764  607523 cri.go:89] found id: ""
	I1213 11:53:00.308851  607523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:53:00.339801  607523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:53:00.369102  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:53:00.369171  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:53:00.383298  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:53:00.383367  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:53:00.383424  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:53:00.395580  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:53:00.395656  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:53:00.405571  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:53:00.415778  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:53:00.415854  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:53:00.424800  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.434079  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:53:00.434162  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.443040  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:53:00.452144  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:53:00.452246  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
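	The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here they simply do not exist yet), so the following kubeadm init can write fresh copies. Condensed into one loop with the same endpoint and files as in this log:

	# drop kubeconfigs that do not reference the expected control-plane endpoint
	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done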
	I1213 11:53:00.461542  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:53:00.503183  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:53:00.503307  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:53:00.580961  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:53:00.581064  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:53:00.581117  607523 kubeadm.go:319] OS: Linux
	I1213 11:53:00.581167  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:53:00.581226  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:53:00.581277  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:53:00.581327  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:53:00.581379  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:53:00.581429  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:53:00.581478  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:53:00.581529  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:53:00.581581  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:53:00.654422  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:53:00.654539  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:53:00.654635  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:53:00.667854  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:53:00.673949  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:53:00.674119  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:53:00.674229  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:53:00.749466  607523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:53:00.853085  607523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:53:01.087749  607523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:53:01.312048  607523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:53:01.513347  607523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:53:01.513768  607523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:01.838749  607523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:53:01.839657  607523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:02.478657  607523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:53:02.876105  607523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:53:03.010338  607523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:53:03.010418  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:53:03.200889  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:53:03.653890  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:53:04.344965  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:53:04.580887  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:53:04.785257  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:53:04.787179  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:53:04.796409  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:53:04.799699  607523 out.go:252]   - Booting up control plane ...
	I1213 11:53:04.799829  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:53:04.799918  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:53:04.803001  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:53:04.836757  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:53:04.837037  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:53:04.849469  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:53:04.850109  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:53:04.853862  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:53:05.015188  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:53:05.015326  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:56:51.920072  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001224221s
	I1213 11:56:51.920104  603921 kubeadm.go:319] 
	I1213 11:56:51.920212  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:56:51.920270  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:56:51.920608  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:56:51.920619  603921 kubeadm.go:319] 
	I1213 11:56:51.920812  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:56:51.920869  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:56:51.921157  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:56:51.921165  603921 kubeadm.go:319] 
	I1213 11:56:51.925513  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:56:51.926006  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:56:51.926180  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:56:51.926479  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:56:51.926517  603921 kubeadm.go:319] 
	W1213 11:56:51.926771  603921 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001224221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
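	The failure above means the kubelet never answered its local health endpoint within kubeadm's 4m0s window, so the control-plane static pods were never confirmed. Before a retry, the useful checks on the node mirror the hints kubeadm prints, plus a look at which cgroup version the host is running; the healthz URL is the one kubeadm polls, and the cgroup check is a general Linux convention rather than anything taken from this log:

	# reproduce kubeadm's probe and inspect why the kubelet is not serving it
	curl -sSL http://127.0.0.1:10248/healthz
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet -n 100 --no-pager
	stat -fc %T /sys/fs/cgroup/    # cgroup2fs means cgroup v2, tmpfs means v1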
	
	I1213 11:56:51.926983  603921 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:56:51.927241  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:56:52.337349  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:56:52.355756  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:56:52.355865  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:56:52.364798  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:56:52.364819  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:56:52.364872  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:56:52.373016  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:56:52.373085  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:56:52.380868  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:56:52.388839  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:56:52.388908  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:56:52.396493  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.404428  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:56:52.404492  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.412543  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:56:52.420710  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:56:52.420784  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:56:52.428931  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:56:52.469486  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:56:52.469812  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:56:52.544538  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:56:52.544634  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:56:52.544691  603921 kubeadm.go:319] OS: Linux
	I1213 11:56:52.544758  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:56:52.544826  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:56:52.544893  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:56:52.544959  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:56:52.545027  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:56:52.545094  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:56:52.545159  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:56:52.545225  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:56:52.545290  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:56:52.613010  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:56:52.613120  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:56:52.613213  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:56:52.631911  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:56:52.635687  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:56:52.635862  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:56:52.635952  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:56:52.636046  603921 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:56:52.636157  603921 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:56:52.636251  603921 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:56:52.636343  603921 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:56:52.636411  603921 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:56:52.636489  603921 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:56:52.636569  603921 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:56:52.636650  603921 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:56:52.636696  603921 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:56:52.636757  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:56:52.776698  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:56:52.958761  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:56:53.117866  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:56:53.292950  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:56:53.736752  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:56:53.737374  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:56:53.739900  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:56:53.743260  603921 out.go:252]   - Booting up control plane ...
	I1213 11:56:53.743409  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:56:53.743561  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:56:53.743673  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:56:53.757211  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:56:53.757338  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:56:53.765875  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:56:53.766984  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:56:53.767070  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:56:53.918187  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:56:53.918313  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:57:05.013826  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000267538s
	I1213 11:57:05.013870  607523 kubeadm.go:319] 
	I1213 11:57:05.013935  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:57:05.013971  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:57:05.014088  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:57:05.014096  607523 kubeadm.go:319] 
	I1213 11:57:05.014210  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:57:05.014246  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:57:05.014279  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:57:05.014287  607523 kubeadm.go:319] 
	I1213 11:57:05.020057  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:57:05.020490  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:57:05.020604  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:57:05.020844  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:57:05.020856  607523 kubeadm.go:319] 
	I1213 11:57:05.020925  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:57:05.021047  607523 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000267538s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
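	Both init attempts also emit the same cgroups v1 deprecation warning: on a v1 host, kubelet v1.35 expects the 'FailCgroupV1' configuration option to be set to false and the validation to be skipped explicitly. A hedged sketch of what such a kubelet configuration patch could look like; the lowerCamelCase field name is the usual KubeletConfiguration spelling, not something confirmed by this log, so verify it against the v1.35 reference before relying on it:

	# hypothetical kubelet config patch for cgroup v1 hosts (verify field name for v1.35)
	cat <<'EOF' | sudo tee /var/tmp/kubelet-cgroupv1-patch.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF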
	
	I1213 11:57:05.021134  607523 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:57:05.432952  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:57:05.445933  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:57:05.446023  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:57:05.454556  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:57:05.454578  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:57:05.454629  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:57:05.462597  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:57:05.462670  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:57:05.470456  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:57:05.478316  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:57:05.478382  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:57:05.485947  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.494252  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:57:05.494320  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.502133  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:57:05.510237  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:57:05.510311  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:57:05.518001  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:57:05.584840  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:57:05.585142  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:57:05.657959  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:57:05.658125  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:57:05.658198  607523 kubeadm.go:319] OS: Linux
	I1213 11:57:05.658288  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:57:05.658378  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:57:05.658471  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:57:05.658558  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:57:05.658635  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:57:05.658730  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:57:05.658813  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:57:05.658915  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:57:05.659000  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:57:05.731597  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:57:05.731775  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:57:05.731903  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:57:05.740855  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:57:05.744423  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:57:05.744578  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:57:05.744679  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:57:05.744796  607523 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:57:05.744887  607523 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:57:05.744992  607523 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:57:05.745076  607523 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:57:05.745170  607523 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:57:05.745499  607523 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:57:05.745582  607523 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:57:05.745655  607523 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:57:05.745694  607523 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:57:05.745749  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:57:05.913677  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:57:06.384962  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:57:07.036559  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:57:07.437110  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:57:07.602655  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:57:07.603483  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:57:07.607251  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:57:07.612344  607523 out.go:252]   - Booting up control plane ...
	I1213 11:57:07.612453  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:57:07.612542  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:57:07.612663  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:57:07.626734  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:57:07.627071  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:57:07.634285  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:57:07.634609  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:57:07.634655  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:57:07.773578  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:57:07.773700  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:00:53.918383  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00010332s
	I1213 12:00:53.918411  603921 kubeadm.go:319] 
	I1213 12:00:53.918468  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:00:53.918502  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:00:53.918607  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:00:53.918611  603921 kubeadm.go:319] 
	I1213 12:00:53.918715  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:00:53.918747  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:00:53.918778  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:00:53.918782  603921 kubeadm.go:319] 
	I1213 12:00:53.924880  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:00:53.925344  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:00:53.925460  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:00:53.925729  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:00:53.925740  603921 kubeadm.go:319] 
	I1213 12:00:53.925866  603921 kubeadm.go:403] duration metric: took 8m5.987919453s to StartCluster
	I1213 12:00:53.925907  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:00:53.925972  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:00:53.926107  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:00:53.953173  603921 cri.go:89] found id: ""
	I1213 12:00:53.953257  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.953275  603921 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:00:53.953283  603921 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:00:53.953363  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:00:53.984628  603921 cri.go:89] found id: ""
	I1213 12:00:53.984655  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.984665  603921 logs.go:284] No container was found matching "etcd"
	I1213 12:00:53.984671  603921 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:00:53.984731  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:00:54.014942  603921 cri.go:89] found id: ""
	I1213 12:00:54.014969  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.014978  603921 logs.go:284] No container was found matching "coredns"
	I1213 12:00:54.014986  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:00:54.015045  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:00:54.064854  603921 cri.go:89] found id: ""
	I1213 12:00:54.064881  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.064890  603921 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:00:54.064897  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:00:54.064981  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:00:54.132162  603921 cri.go:89] found id: ""
	I1213 12:00:54.132187  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.132195  603921 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:00:54.132201  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:00:54.132311  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:00:54.159680  603921 cri.go:89] found id: ""
	I1213 12:00:54.159703  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.159712  603921 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:00:54.159718  603921 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:00:54.159779  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:00:54.185867  603921 cri.go:89] found id: ""
	I1213 12:00:54.185893  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.185902  603921 logs.go:284] No container was found matching "kindnet"
	I1213 12:00:54.185912  603921 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:00:54.185923  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:00:54.228270  603921 logs.go:123] Gathering logs for container status ...
	I1213 12:00:54.228303  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:00:54.257730  603921 logs.go:123] Gathering logs for kubelet ...
	I1213 12:00:54.257759  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:00:54.324854  603921 logs.go:123] Gathering logs for dmesg ...
	I1213 12:00:54.324892  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:00:54.342225  603921 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:00:54.342252  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:00:54.409722  603921 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:00:54.409752  603921 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:00:54.409821  603921 out.go:285] * 
	W1213 12:00:54.410005  603921 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.410026  603921 out.go:285] * 
	W1213 12:00:54.412399  603921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:00:54.417573  603921 out.go:203] 
	W1213 12:00:54.420481  603921 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.420529  603921 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:00:54.420553  603921 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:00:54.423665  603921 out.go:203] 
	I1213 12:01:07.773320  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000195913s
	I1213 12:01:07.773347  607523 kubeadm.go:319] 
	I1213 12:01:07.773405  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:01:07.773438  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:01:07.773542  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:01:07.773547  607523 kubeadm.go:319] 
	I1213 12:01:07.773652  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:01:07.773685  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:01:07.773715  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:01:07.773720  607523 kubeadm.go:319] 
	I1213 12:01:07.777876  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:01:07.778275  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:01:07.778377  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:01:07.778624  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:01:07.778630  607523 kubeadm.go:319] 
	I1213 12:01:07.778695  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:01:07.778746  607523 kubeadm.go:403] duration metric: took 8m7.596100369s to StartCluster
	I1213 12:01:07.778786  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:01:07.778843  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:01:07.814673  607523 cri.go:89] found id: ""
	I1213 12:01:07.814694  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.814703  607523 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:01:07.814709  607523 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:01:07.814771  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:01:07.872169  607523 cri.go:89] found id: ""
	I1213 12:01:07.872191  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.872199  607523 logs.go:284] No container was found matching "etcd"
	I1213 12:01:07.872205  607523 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:01:07.872262  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:01:07.897159  607523 cri.go:89] found id: ""
	I1213 12:01:07.897183  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.897192  607523 logs.go:284] No container was found matching "coredns"
	I1213 12:01:07.897198  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:01:07.897271  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:01:07.926240  607523 cri.go:89] found id: ""
	I1213 12:01:07.926266  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.926275  607523 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:01:07.926285  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:01:07.926342  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:01:07.954071  607523 cri.go:89] found id: ""
	I1213 12:01:07.954144  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.954168  607523 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:01:07.954187  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:01:07.954259  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:01:07.980272  607523 cri.go:89] found id: ""
	I1213 12:01:07.980300  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.980310  607523 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:01:07.980316  607523 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:01:07.980371  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:01:08.011383  607523 cri.go:89] found id: ""
	I1213 12:01:08.011411  607523 logs.go:282] 0 containers: []
	W1213 12:01:08.011421  607523 logs.go:284] No container was found matching "kindnet"
	I1213 12:01:08.011431  607523 logs.go:123] Gathering logs for kubelet ...
	I1213 12:01:08.011442  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:01:08.079910  607523 logs.go:123] Gathering logs for dmesg ...
	I1213 12:01:08.079950  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:01:08.097373  607523 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:01:08.097401  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:01:08.160941  607523 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:01:08.153055    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.153840    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155465    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155845    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.157368    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:01:08.153055    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.153840    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155465    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155845    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.157368    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:01:08.161010  607523 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:01:08.161029  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:01:08.192670  607523 logs.go:123] Gathering logs for container status ...
	I1213 12:01:08.192707  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:01:08.220898  607523 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:01:08.220962  607523 out.go:285] * 
	W1213 12:01:08.221021  607523 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:01:08.221042  607523 out.go:285] * 
	W1213 12:01:08.223167  607523 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:01:08.228262  607523 out.go:203] 
	W1213 12:01:08.230390  607523 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:01:08.230436  607523 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:01:08.230456  607523 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:01:08.233619  607523 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.656750714Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.65679503Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.65686909Z" level=info msg="Create NRI interface"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657011146Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657027532Z" level=info msg="runtime interface created"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.65703938Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657050458Z" level=info msg="runtime interface starting up..."
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657056603Z" level=info msg="starting plugins..."
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657071118Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657157798Z" level=info msg="No systemd watchdog enabled"
	Dec 13 11:52:58 newest-cni-800979 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.658289681Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=44bde5a7-ef91-4bfc-b2de-9f916c14ea3c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.659003779Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e31a0601-0e26-42cc-9404-dcdd39389cdb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.65956494Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ac873216-657d-4cc0-892e-00880e41eafa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.65999591Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=530a0292-db1c-43ef-859f-467c374fb0aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.660429193Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a9a7e7c8-e21b-4849-b310-763c391d55ad name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.660878797Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=091f4636-0f31-494b-b2e3-c60ba6c5537e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.661381094Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4d39732e-132e-4aab-83a7-bf35ce936d10 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.736588087Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a83c8ed6-f494-4ce0-badc-348aab186d95 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.737233081Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d6def276-17b4-41a9-b735-a72e840419d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.737730291Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=1dbb7a1b-99e5-4954-8a8f-33c6f4482cb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.738159423Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a8c3fac2-6ffe-4483-991e-866f2c39acf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.738635382Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=b4e624e1-03da-40a9-aa60-b9cb1b62a27c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.739062077Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=43555a22-dfd3-4770-a890-ee016b44ec91 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.73979069Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=91be6437-ff89-42db-9528-f454720eb4de name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:01:09.380175    5014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:09.380749    5014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:09.382407    5014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:09.383066    5014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:09.384669    5014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:01:09 up  3:43,  0 user,  load average: 1.01, 1.02, 1.57
	Linux newest-cni-800979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:01:07 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:01:07 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 639.
	Dec 13 12:01:07 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:07 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:07 newest-cni-800979 kubelet[4824]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:01:07 newest-cni-800979 kubelet[4824]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:01:07 newest-cni-800979 kubelet[4824]: E1213 12:01:07.851507    4824 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:01:07 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:01:07 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:01:08 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Dec 13 12:01:08 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:08 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:08 newest-cni-800979 kubelet[4918]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:01:08 newest-cni-800979 kubelet[4918]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:01:08 newest-cni-800979 kubelet[4918]: E1213 12:01:08.606794    4918 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:01:08 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:01:08 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:01:09 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Dec 13 12:01:09 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:09 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:01:09 newest-cni-800979 kubelet[5008]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:01:09 newest-cni-800979 kubelet[5008]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:01:09 newest-cni-800979 kubelet[5008]: E1213 12:01:09.360953    5008 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:01:09 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:01:09 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979: exit status 6 (333.151076ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:01:09.871542  618536 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-800979" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-800979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (505.73s)
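
The kubelet log above shows why this start never converges: every restart exits with "kubelet is configured to not run on a host using cgroup v1", so systemd keeps cycling the service (restart counters 639, 640, 641, ...) without the API server ever coming up. As an illustrative check outside the recorded run, one quick way to see which cgroup version a host exposes is:

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means the host is still on cgroup v1
	stat -fc %T /sys/fs/cgroup/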

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-307409 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-307409 create -f testdata/busybox.yaml: exit status 1 (49.764572ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-307409" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-307409 create -f testdata/busybox.yaml failed: exit status 1
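
The create fails simply because no "no-preload-307409" context exists: the start for this profile never completed (see the Audit table and the missing-endpoint status error in the post-mortem below), so nothing was written to the job's kubeconfig. An illustrative way to confirm which contexts that kubeconfig actually holds, outside the recorded run, would be:

	# list contexts known to the kubeconfig this job uses
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/22127-354468/kubeconfig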
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307409
helpers_test.go:244: (dbg) docker inspect no-preload-307409:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	        "Created": "2025-12-13T11:52:23.357834479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:52:23.426122666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hosts",
	        "LogPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a-json.log",
	        "Name": "/no-preload-307409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-307409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	                "LowerDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307409",
	                "Source": "/var/lib/docker/volumes/no-preload-307409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307409",
	                "name.minikube.sigs.k8s.io": "no-preload-307409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bbb75ba869ad4e24d065678acb24f13b332d42f86102a96ce228c9f56900de1",
	            "SandboxKey": "/var/run/docker/netns/3bbb75ba869a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-307409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:08:52:80:ec:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "280e424abad6162e6fbaaf316b3c6095ab0d80a59a1f82eb556a84b2dd4f139a",
	                    "EndpointID": "fa43d8567fac17df2e79f566f84f62b5ae267b3a77d79f87cf8d10e233d98a54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307409",
	                        "9fe6186bf0c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 6 (350.955601ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:00:56.569674  617347 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-307409 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                            │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                            │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:52:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:52:44.222945  607523 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:44.223057  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223099  607523 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:44.223106  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223364  607523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:44.223812  607523 out.go:368] Setting JSON to false
	I1213 11:52:44.224724  607523 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12917,"bootTime":1765613848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:44.224797  607523 start.go:143] virtualization:  
	I1213 11:52:44.228935  607523 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:44.232087  607523 notify.go:221] Checking for updates...
	I1213 11:52:44.232862  607523 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:44.236046  607523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:44.241086  607523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:44.244482  607523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:44.247343  607523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:44.250267  607523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:44.253709  607523 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:44.253853  607523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:44.284666  607523 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:44.284774  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.401910  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.38729859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.402031  607523 docker.go:319] overlay module found
	I1213 11:52:44.405585  607523 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:44.408428  607523 start.go:309] selected driver: docker
	I1213 11:52:44.408454  607523 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:44.408468  607523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:44.409713  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.548406  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.53777287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.548555  607523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:52:44.548581  607523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:52:44.549476  607523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:52:44.552258  607523 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:44.555279  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.555356  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.555365  607523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:44.555448  607523 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:44.558889  607523 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 11:52:44.561893  607523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:44.564946  607523 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:44.567939  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:44.568029  607523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:52:44.568050  607523 cache.go:65] Caching tarball of preloaded images
	I1213 11:52:44.568145  607523 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:52:44.568156  607523 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 11:52:44.568295  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:44.568315  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json: {Name:mkca051d0f4222f12ada2e542e9765aa1caaa1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:44.568460  607523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:44.614235  607523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:44.614511  607523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:44.614568  607523 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:44.614617  607523 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:44.614732  607523 start.go:364] duration metric: took 92.595µs to acquireMachinesLock for "newest-cni-800979"
	I1213 11:52:44.614763  607523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:44.614850  607523 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:43.447904  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.748996566s)
	I1213 11:52:43.447934  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 11:52:43.447952  603921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:43.448001  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:44.178615  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:52:44.178655  603921 cache_images.go:125] Successfully loaded all cached images
	I1213 11:52:44.178662  603921 cache_images.go:94] duration metric: took 13.878753268s to LoadCachedImages
	I1213 11:52:44.178674  603921 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:44.178763  603921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:44.178851  603921 ssh_runner.go:195] Run: crio config
	I1213 11:52:44.242383  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.242401  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.242418  603921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:52:44.242441  603921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:44.242555  603921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:44.242622  603921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.254521  603921 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 11:52:44.254582  603921 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.274613  603921 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 11:52:44.274705  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 11:52:44.275568  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 11:52:44.278466  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 11:52:44.279131  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 11:52:44.279162  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 11:52:45.122331  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:45.166456  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 11:52:45.191725  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 11:52:45.191781  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 11:52:45.304315  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 11:52:45.334054  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 11:52:45.334112  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1213 11:52:46.015388  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:46.024888  603921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:46.040762  603921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:46.056856  603921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 11:52:46.080441  603921 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:46.084885  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:46.097815  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:46.230479  603921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:46.251958  603921 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 11:52:46.251982  603921 certs.go:195] generating shared ca certs ...
	I1213 11:52:46.251998  603921 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.252212  603921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:46.252287  603921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:46.252302  603921 certs.go:257] generating profile certs ...
	I1213 11:52:46.252373  603921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 11:52:46.252392  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt with IP's: []
	I1213 11:52:46.687159  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt ...
	I1213 11:52:46.687196  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt: {Name:mkd3b6de93eb4d0d7c38606e110ec8041a7a8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687382  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key ...
	I1213 11:52:46.687530  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key: {Name:mk69f4e38edb3a6758b30b8919bec09ed6524780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687680  603921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 11:52:46.687705  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:52:47.101196  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b ...
	I1213 11:52:47.101275  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b: {Name:mkf348306e6448fd779f0c40568bfbc2591db27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101515  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b ...
	I1213 11:52:47.101554  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b: {Name:mk67006fcc87c7852dc9dd2baf2e5c091f89fb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101697  603921 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt
	I1213 11:52:47.101816  603921 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key
	I1213 11:52:47.101906  603921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 11:52:47.101964  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt with IP's: []
	I1213 11:52:47.391626  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt ...
	I1213 11:52:47.391702  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt: {Name:mk6bf9ff3c46be8a69edc887a1d740e84c930536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.391910  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key ...
	I1213 11:52:47.391946  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key: {Name:mk5282a1a4966c51394d6aeb663ae12cef8b3a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
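	(Annotation: each "generating signed profile cert" step above produces a leaf certificate signed by the shared minikube CA, with the listed IPs embedded as subject alternative names — for the apiserver cert, 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.85.2. Below is a self-contained sketch of that pattern using Go's crypto/x509; it generates a throwaway CA in memory instead of loading the on-disk ca.key, so treat it as illustrative only.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair (in the real run this is the existing ca.crt/ca.key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with IP SANs, as in the apiserver profile cert above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	fmt.Fprintln(os.Stderr, "wrote CA-signed cert with 4 IP SANs")
}
```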
	I1213 11:52:47.392186  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:47.392256  603921 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:47.392281  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:47.392345  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:47.392401  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:47.392449  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:47.392534  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:47.393177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:47.413169  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:47.433634  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:47.456446  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:47.475453  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:47.495921  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:52:47.516359  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:47.533557  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:52:47.553686  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:47.576528  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:47.595023  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:47.617574  603921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:47.632766  603921 ssh_runner.go:195] Run: openssl version
	I1213 11:52:47.642255  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.651062  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:47.660280  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665117  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665212  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.711366  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:52:47.719094  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:52:47.727218  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.735147  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:52:47.743430  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748386  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748477  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.811036  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:52:47.824172  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:52:47.833720  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.842937  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:47.852257  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857336  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857459  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.913987  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.923742  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
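	(Annotation: the repeated ls / openssl / ln sequence above installs each PEM into the system trust store — `openssl x509 -hash -noout` prints the certificate's subject hash, and `/etc/ssl/certs/<hash>.0` is symlinked to the PEM so OpenSSL's lookup-by-hash can resolve it. A small sketch of one iteration of that loop, shelling out to openssl (assumed to be on PATH) rather than re-implementing the hash; paths in main are illustrative.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the "openssl x509 -hash -noout" + "ln -fs" steps:
// it asks openssl for the subject hash of pemPath and points
// <certsDir>/<hash>.0 at it.
func linkBySubjectHash(certsDir, pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative paths only; the real run targets /etc/ssl/certs.
	if err := linkBySubjectHash("certs", "certs/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```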
	I1213 11:52:47.932105  603921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:52:47.937831  603921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:52:47.937953  603921 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:47.938056  603921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:52:47.938131  603921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:52:47.977617  603921 cri.go:89] found id: ""
	I1213 11:52:47.977734  603921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:52:47.986677  603921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:52:47.995428  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:52:47.995568  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:52:48.012929  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:52:48.013001  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:52:48.013078  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:52:48.023587  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:52:48.023720  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:52:48.033048  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:52:48.042898  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:52:48.043030  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:52:48.052336  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.062442  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:52:48.062560  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.071404  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:52:48.081302  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:52:48.081415  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
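	(Annotation: the block above is the stale-config sweep — for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, minikube greps for the expected `https://control-plane.minikube.internal:8443` endpoint and removes the file when the endpoint is not found; here every grep exits with status 2 simply because the files do not exist yet on a first start. A compact sketch of that loop, with the paths hard-coded for illustration:)

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the expected endpoint
		}
		// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
		if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Fprintln(os.Stderr, rmErr)
		}
	}
}
```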
	I1213 11:52:48.090412  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:52:48.139895  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:52:48.140310  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:52:48.244346  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:52:48.244445  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:52:48.244514  603921 kubeadm.go:319] OS: Linux
	I1213 11:52:48.244581  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:52:48.244649  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:52:48.244717  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:52:48.244785  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:52:48.244849  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:52:48.244917  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:52:48.244983  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:52:48.245052  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:52:48.245113  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:52:48.326956  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:52:48.327125  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:52:48.327254  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:52:48.353781  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:52:44.618660  607523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:44.618986  607523 start.go:159] libmachine.API.Create for "newest-cni-800979" (driver="docker")
	I1213 11:52:44.619024  607523 client.go:173] LocalClient.Create starting
	I1213 11:52:44.619095  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:44.619134  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619169  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619234  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:44.619259  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619275  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619828  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:44.681886  607523 cli_runner.go:211] docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:44.682019  607523 network_create.go:284] running [docker network inspect newest-cni-800979] to gather additional debugging logs...
	I1213 11:52:44.682044  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979
	W1213 11:52:44.783263  607523 cli_runner.go:211] docker network inspect newest-cni-800979 returned with exit code 1
	I1213 11:52:44.783303  607523 network_create.go:287] error running [docker network inspect newest-cni-800979]: docker network inspect newest-cni-800979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800979 not found
	I1213 11:52:44.783456  607523 network_create.go:289] output of [docker network inspect newest-cni-800979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800979 not found
	
	** /stderr **
	I1213 11:52:44.783853  607523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:44.869365  607523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:44.869936  607523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:44.870324  607523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:44.872231  607523 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 11:52:44.872625  607523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-280e424abad6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5e:ad:5b:52:ee:cb} reservation:<nil>}
	I1213 11:52:44.873100  607523 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a730}
	I1213 11:52:44.873121  607523 network_create.go:124] attempt to create docker network newest-cni-800979 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 11:52:44.873186  607523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800979 newest-cni-800979
	I1213 11:52:45.033952  607523 network_create.go:108] docker network newest-cni-800979 192.168.94.0/24 created
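	(Annotation: the "skipping subnet ... that is taken" lines show how the profile's network is picked — candidate 192.168.x.0/24 ranges are probed in order (49, 58, 67, 76, 85, ...) and the first one not already used by an existing bridge or reservation wins, here 192.168.94.0/24. A toy version of that scan, with the taken set passed in explicitly rather than read from `docker network inspect`:)

```go
package main

import "fmt"

// firstFreeSubnet walks 192.168.<49+9k>.0/24 candidates and returns the
// first one not present in taken, loosely mirroring the probe sequence in
// the log above (49, 58, 67, 76, 85, 94, ...).
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if cidr, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.94.0/24
	}
}
```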
	I1213 11:52:45.033989  607523 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-800979" container
	I1213 11:52:45.034089  607523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:45.110922  607523 cli_runner.go:164] Run: docker volume create newest-cni-800979 --label name.minikube.sigs.k8s.io=newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:45.147181  607523 oci.go:103] Successfully created a docker volume newest-cni-800979
	I1213 11:52:45.148756  607523 cli_runner.go:164] Run: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:46.576150  607523 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.427287827s)
	I1213 11:52:46.576182  607523 oci.go:107] Successfully prepared a docker volume newest-cni-800979
	I1213 11:52:46.576222  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:46.576231  607523 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:52:46.576286  607523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
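	(Annotation: the extraction step above mounts the preload tarball read-only into a throwaway kicbase container whose entrypoint is tar, and untars it into the profile's named volume. A sketch of how such an invocation could be assembled with os/exec — the image tag matches the log, the tarball path is a placeholder, and the helper itself is illustrative rather than minikube's real code:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload runs a one-shot container with tar as the entrypoint so the
// lz4 preload ends up inside the named docker volume used by the node.
func extractPreload(image, tarball, volume string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := extractPreload(
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083",
		"/path/to/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4",
		"newest-cni-800979")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```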
	I1213 11:52:48.362615  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:52:48.362749  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:52:48.362861  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:52:48.406340  603921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:52:48.617898  603921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:52:48.894950  603921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:52:49.002897  603921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:52:49.595632  603921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:52:49.596022  603921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.703067  603921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:52:49.703500  603921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.852748  603921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:52:49.985441  603921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:52:50.361702  603921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:52:50.362007  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:52:50.448441  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:52:50.524868  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:52:51.254957  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:52:51.473347  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:52:51.686418  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:52:51.686517  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:52:51.690277  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:52:51.694117  603921 out.go:252]   - Booting up control plane ...
	I1213 11:52:51.694231  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:52:51.694310  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:52:51.695018  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:52:51.714016  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:52:51.714689  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:52:51.728439  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:52:51.728548  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:52:51.728589  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:52:51.918802  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:52:51.918928  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:52:51.477960  607523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.901639858s)
	I1213 11:52:51.478004  607523 kic.go:203] duration metric: took 4.901755297s to extract preloaded images to volume ...
	W1213 11:52:51.478154  607523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:51.478257  607523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:51.600099  607523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800979 --name newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800979 --network newest-cni-800979 --ip 192.168.94.2 --volume newest-cni-800979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:52.003446  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Running}}
	I1213 11:52:52.025630  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.044945  607523 cli_runner.go:164] Run: docker exec newest-cni-800979 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:52.103780  607523 oci.go:144] the created container "newest-cni-800979" has a running status.
	I1213 11:52:52.103827  607523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa...
	I1213 11:52:52.454986  607523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:52.499855  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.520167  607523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:52.520186  607523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-800979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:52.595209  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.616614  607523 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:52.616710  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:52.645695  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:52.646054  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:52.646065  607523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:52.646853  607523 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49104->127.0.0.1:33463: read: connection reset by peer
	I1213 11:52:55.795509  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
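	(Annotation: the "Error dialing TCP: ... connection reset by peer" line followed a few seconds later by a successful `hostname` run is the usual pattern right after container creation — sshd inside the node is not yet accepting connections, so the provisioner retries. Below is a minimal retry loop for the TCP stage only; the real provisioning code layers an SSH client on top, and the address/timeout values are illustrative.)

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection until the deadline passes,
// sleeping between failures; forwarded SSH ports of freshly started
// containers typically reset the first few attempts.
func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("giving up on %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33463", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```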
	
	I1213 11:52:55.795546  607523 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 11:52:55.795609  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:55.823768  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:55.824086  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:55.824105  607523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 11:52:55.984531  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.984627  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.004427  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.004789  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.004806  607523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:56.155779  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:52:56.155809  607523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:56.155840  607523 ubuntu.go:190] setting up certificates
	I1213 11:52:56.155849  607523 provision.go:84] configureAuth start
	I1213 11:52:56.155916  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:56.173051  607523 provision.go:143] copyHostCerts
	I1213 11:52:56.173126  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:56.173140  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:56.173218  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:56.173314  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:56.173326  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:56.173354  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:56.173407  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:56.173416  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:56.173440  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:56.173493  607523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 11:52:56.495741  607523 provision.go:177] copyRemoteCerts
	I1213 11:52:56.495819  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:56.495860  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.513776  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:56.623272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:56.640893  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:56.658251  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:56.675898  607523 provision.go:87] duration metric: took 520.035144ms to configureAuth
	I1213 11:52:56.675924  607523 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:56.676119  607523 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:56.676229  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.693573  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.693885  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.693913  607523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:57.000433  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:57.000459  607523 machine.go:97] duration metric: took 4.383824523s to provisionDockerMachine
	I1213 11:52:57.000471  607523 client.go:176] duration metric: took 12.381437402s to LocalClient.Create
	I1213 11:52:57.000485  607523 start.go:167] duration metric: took 12.381502329s to libmachine.API.Create "newest-cni-800979"
	I1213 11:52:57.000493  607523 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 11:52:57.000506  607523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:57.000573  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:57.000635  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.019654  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.123498  607523 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:57.126887  607523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:57.126915  607523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:57.126942  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:57.127003  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:57.127090  607523 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:57.127193  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:57.134628  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:57.153601  607523 start.go:296] duration metric: took 153.093637ms for postStartSetup
	I1213 11:52:57.154022  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.174170  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:57.174465  607523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:57.174516  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.191003  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.300652  607523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:57.305941  607523 start.go:128] duration metric: took 12.691075107s to createHost
	I1213 11:52:57.305969  607523 start.go:83] releasing machines lock for "newest-cni-800979", held for 12.691222882s
	I1213 11:52:57.306067  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.324383  607523 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:57.324411  607523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:57.324436  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.324473  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.349379  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.349454  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.540188  607523 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:57.546743  607523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:57.581981  607523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:57.586210  607523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:57.586277  607523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:57.614440  607523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
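	(Annotation: the find/mv step above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs next (kindnet, per the later "recommending kindnet" line) stays active. Roughly the same move expressed with filepath.Glob; the directory and suffix are taken from the log, the helper is a sketch:)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames *bridge* and *podman* configs in the CNI
// directory, skipping anything already carrying the .mk_disabled suffix.
func disableConflictingCNI(dir string) error {
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	if err := disableConflictingCNI("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```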
	I1213 11:52:57.614460  607523 start.go:496] detecting cgroup driver to use...
	I1213 11:52:57.614492  607523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:57.614539  607523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:57.632118  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:57.645277  607523 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:57.645361  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:57.663447  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:57.682384  607523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:57.805277  607523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:57.932514  607523 docker.go:234] disabling docker service ...
	I1213 11:52:57.932589  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:57.955202  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:57.968354  607523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:58.113128  607523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:58.247772  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:58.262298  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:58.277400  607523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:58.277526  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.287200  607523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:58.287335  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.296697  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.305672  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.315083  607523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:58.324248  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.333206  607523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.346564  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.355703  607523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:58.363253  607523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:58.370805  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:58.492125  607523 ssh_runner.go:195] Run: sudo systemctl restart crio
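	(Annotation: the run of sed commands above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf — pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and allow unprivileged low ports via default_sysctls — before crio is restarted. The same line-level rewrites done in Go over an in-memory copy of the file, as a sketch of the first three substitutions only:)

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteCrioConf applies the substitutions the log performs with sed:
// replace the pause_image line, replace cgroup_manager and re-add
// conmon_cgroup immediately after it, dropping any stale conmon_cgroup line.
func rewriteCrioConf(conf string) string {
	var out []string
	for _, line := range strings.Split(conf, "\n") {
		trimmed := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(trimmed, "pause_image"):
			out = append(out, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		case strings.HasPrefix(trimmed, "cgroup_manager"):
			out = append(out, `cgroup_manager = "cgroupfs"`, `conmon_cgroup = "pod"`)
		case strings.HasPrefix(trimmed, "conmon_cgroup"):
			continue // re-added right after cgroup_manager above
		default:
			out = append(out, line)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\""
	fmt.Println(rewriteCrioConf(sample))
}
```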
	I1213 11:52:58.663207  607523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:58.663336  607523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:58.667219  607523 start.go:564] Will wait 60s for crictl version
	I1213 11:52:58.667334  607523 ssh_runner.go:195] Run: which crictl
	I1213 11:52:58.671116  607523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:58.697501  607523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
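	(Annotation: "Will wait 60s for socket path" and "Will wait 60s for crictl version" above are simple readiness polls — stat the CRI socket until it exists, then keep invoking `crictl version` until it answers. A stripped-down poller for the socket half, with the path and timeout taken from the log and the helper name hypothetical:)

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the timeout expires,
// the same shape as the 60s wait on /var/run/crio/crio.sock in the log.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + path)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("socket is ready")
}
```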
	I1213 11:52:58.697619  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.733197  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.768647  607523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:58.771459  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:58.789274  607523 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:58.795116  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:58.812164  607523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:52:58.814926  607523 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:58.815100  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:58.815179  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.855416  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.855438  607523 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:52:58.855493  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.882823  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.882846  607523 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:52:58.882855  607523 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:58.882940  607523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:58.883028  607523 ssh_runner.go:195] Run: crio config
	I1213 11:52:58.937332  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:58.937355  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:58.937377  607523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:52:58.937402  607523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:58.937530  607523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:58.937607  607523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:58.945256  607523 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:52:58.945332  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:58.952916  607523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:58.965421  607523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:58.978594  607523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1213 11:52:58.991343  607523 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:58.994981  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:59.006043  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:59.120731  607523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:59.136632  607523 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 11:52:59.136650  607523 certs.go:195] generating shared ca certs ...
	I1213 11:52:59.136667  607523 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.136813  607523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:59.136864  607523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:59.136875  607523 certs.go:257] generating profile certs ...
	I1213 11:52:59.136930  607523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 11:52:59.136948  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt with IP's: []
	I1213 11:52:59.229537  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt ...
	I1213 11:52:59.229569  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt: {Name:mk69c62c6a65f19f1e9ae6f6006b84310e5ca69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229797  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key ...
	I1213 11:52:59.229813  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key: {Name:mk0d678e2df0ba46ea7a7d9db0beddac15d16cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229927  607523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 11:52:59.229947  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 11:52:59.395722  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 ...
	I1213 11:52:59.395753  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606: {Name:mk2f0d7037f2191b2fb310c8e6e39abce6919307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.395933  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 ...
	I1213 11:52:59.395948  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606: {Name:mkeda4d05cf7f14a6919666348bb90fff24821e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.396035  607523 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt
	I1213 11:52:59.396122  607523 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key
	I1213 11:52:59.396187  607523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 11:52:59.396205  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt with IP's: []
	I1213 11:52:59.677399  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt ...
	I1213 11:52:59.677431  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt: {Name:mk4f6f44ef9664fbc510805af3a0a5d8216b34d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677617  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key ...
	I1213 11:52:59.677634  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key: {Name:mk08e1a717d212a6e36443fd4449253d4dfd4e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677867  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:59.677925  607523 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:59.677936  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:59.677963  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:59.677989  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:59.678018  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:59.678067  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:59.678646  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:59.697504  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:59.715937  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:59.733272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:59.751842  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:59.769868  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:52:59.787032  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:59.804197  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:52:59.822307  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:59.840119  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:59.857580  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:59.875033  607523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:59.887226  607523 ssh_runner.go:195] Run: openssl version
	I1213 11:52:59.893568  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.900683  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:59.907927  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911699  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911785  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.952546  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.959999  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.967191  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.974551  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:59.981936  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985667  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985735  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:53:00.029636  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:53:00.039949  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:53:00.051259  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.062203  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:53:00.071922  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077479  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077644  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.129667  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:53:00.145873  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:53:00.165719  607523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:53:00.182484  607523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:53:00.182650  607523 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:53:00.191964  607523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:53:00.192781  607523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:53:00.308764  607523 cri.go:89] found id: ""
	I1213 11:53:00.308851  607523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:53:00.339801  607523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:53:00.369102  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:53:00.369171  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:53:00.383298  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:53:00.383367  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:53:00.383424  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:53:00.395580  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:53:00.395656  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:53:00.405571  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:53:00.415778  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:53:00.415854  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:53:00.424800  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.434079  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:53:00.434162  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.443040  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:53:00.452144  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:53:00.452246  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:53:00.461542  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:53:00.503183  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:53:00.503307  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:53:00.580961  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:53:00.581064  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:53:00.581117  607523 kubeadm.go:319] OS: Linux
	I1213 11:53:00.581167  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:53:00.581226  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:53:00.581277  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:53:00.581327  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:53:00.581379  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:53:00.581429  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:53:00.581478  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:53:00.581529  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:53:00.581581  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:53:00.654422  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:53:00.654539  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:53:00.654635  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:53:00.667854  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:53:00.673949  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:53:00.674119  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:53:00.674229  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:53:00.749466  607523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:53:00.853085  607523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:53:01.087749  607523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:53:01.312048  607523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:53:01.513347  607523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:53:01.513768  607523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:01.838749  607523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:53:01.839657  607523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:02.478657  607523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:53:02.876105  607523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:53:03.010338  607523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:53:03.010418  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:53:03.200889  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:53:03.653890  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:53:04.344965  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:53:04.580887  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:53:04.785257  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:53:04.787179  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:53:04.796409  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:53:04.799699  607523 out.go:252]   - Booting up control plane ...
	I1213 11:53:04.799829  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:53:04.799918  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:53:04.803001  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:53:04.836757  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:53:04.837037  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:53:04.849469  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:53:04.850109  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:53:04.853862  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:53:05.015188  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:53:05.015326  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:56:51.920072  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001224221s
	I1213 11:56:51.920104  603921 kubeadm.go:319] 
	I1213 11:56:51.920212  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:56:51.920270  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:56:51.920608  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:56:51.920619  603921 kubeadm.go:319] 
	I1213 11:56:51.920812  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:56:51.920869  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:56:51.921157  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:56:51.921165  603921 kubeadm.go:319] 
	I1213 11:56:51.925513  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:56:51.926006  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:56:51.926180  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:56:51.926479  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:56:51.926517  603921 kubeadm.go:319] 
	W1213 11:56:51.926771  603921 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001224221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
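The failure above reduces to the kubelet never answering its local health endpoint at http://127.0.0.1:10248/healthz within the 4m0s wait-control-plane window. A minimal diagnostic sketch on the affected node, using only the checks the kubeadm message itself suggests (endpoint URL and service name are taken from the output above, not verified beyond it):

    # probe the health endpoint kubeadm polls during wait-control-plane
    curl -sSL http://127.0.0.1:10248/healthz
    # inspect the kubelet service state and its recent unit logs
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100

If the kubelet unit is dead or crash-looping, the journal output normally names the misconfiguration (for example the cgroup-related conditions flagged in the preflight warnings above).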
	I1213 11:56:51.926983  603921 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:56:51.927241  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:56:52.337349  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:56:52.355756  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:56:52.355865  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:56:52.364798  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:56:52.364819  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:56:52.364872  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:56:52.373016  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:56:52.373085  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:56:52.380868  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:56:52.388839  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:56:52.388908  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:56:52.396493  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.404428  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:56:52.404492  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.412543  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:56:52.420710  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:56:52.420784  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:56:52.428931  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:56:52.469486  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:56:52.469812  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:56:52.544538  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:56:52.544634  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:56:52.544691  603921 kubeadm.go:319] OS: Linux
	I1213 11:56:52.544758  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:56:52.544826  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:56:52.544893  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:56:52.544959  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:56:52.545027  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:56:52.545094  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:56:52.545159  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:56:52.545225  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:56:52.545290  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:56:52.613010  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:56:52.613120  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:56:52.613213  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:56:52.631911  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:56:52.635687  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:56:52.635862  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:56:52.635952  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:56:52.636046  603921 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:56:52.636157  603921 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:56:52.636251  603921 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:56:52.636343  603921 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:56:52.636411  603921 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:56:52.636489  603921 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:56:52.636569  603921 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:56:52.636650  603921 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:56:52.636696  603921 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:56:52.636757  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:56:52.776698  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:56:52.958761  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:56:53.117866  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:56:53.292950  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:56:53.736752  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:56:53.737374  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:56:53.739900  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:56:53.743260  603921 out.go:252]   - Booting up control plane ...
	I1213 11:56:53.743409  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:56:53.743561  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:56:53.743673  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:56:53.757211  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:56:53.757338  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:56:53.765875  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:56:53.766984  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:56:53.767070  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:56:53.918187  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:56:53.918313  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:57:05.013826  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000267538s
	I1213 11:57:05.013870  607523 kubeadm.go:319] 
	I1213 11:57:05.013935  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:57:05.013971  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:57:05.014088  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:57:05.014096  607523 kubeadm.go:319] 
	I1213 11:57:05.014210  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:57:05.014246  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:57:05.014279  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:57:05.014287  607523 kubeadm.go:319] 
	I1213 11:57:05.020057  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:57:05.020490  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:57:05.020604  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:57:05.020844  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:57:05.020856  607523 kubeadm.go:319] 
	I1213 11:57:05.020925  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:57:05.021047  607523 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000267538s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
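The repeated [WARNING SystemVerification] lines point at the likely root cause: the node runs cgroups v1, and for kubelet v1.35 or newer the warning says the kubelet configuration option 'FailCgroupV1' must be set to 'false' (and the corresponding validation skipped explicitly). A sketch of how that setting would sit in the KubeletConfiguration block minikube already generates, assuming the field follows the usual lowerCamelCase convention of kubelet.config.k8s.io/v1beta1:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # explicitly allow the kubelet to start on a cgroups v1 host,
    # as required by the SystemVerification warning above (assumed field name)
    failCgroupV1: false

This is a hypothesis consistent with the warning text, not a confirmed fix from this run; the kubelet journal on the node would need to confirm that cgroup handling is what keeps it from becoming healthy.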
	I1213 11:57:05.021134  607523 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:57:05.432952  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:57:05.445933  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:57:05.446023  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:57:05.454556  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:57:05.454578  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:57:05.454629  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:57:05.462597  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:57:05.462670  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:57:05.470456  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:57:05.478316  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:57:05.478382  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:57:05.485947  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.494252  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:57:05.494320  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.502133  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:57:05.510237  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:57:05.510311  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:57:05.518001  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:57:05.584840  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:57:05.585142  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:57:05.657959  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:57:05.658125  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:57:05.658198  607523 kubeadm.go:319] OS: Linux
	I1213 11:57:05.658288  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:57:05.658378  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:57:05.658471  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:57:05.658558  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:57:05.658635  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:57:05.658730  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:57:05.658813  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:57:05.658915  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:57:05.659000  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:57:05.731597  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:57:05.731775  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:57:05.731903  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:57:05.740855  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:57:05.744423  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:57:05.744578  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:57:05.744679  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:57:05.744796  607523 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:57:05.744887  607523 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:57:05.744992  607523 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:57:05.745076  607523 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:57:05.745170  607523 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:57:05.745499  607523 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:57:05.745582  607523 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:57:05.745655  607523 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:57:05.745694  607523 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:57:05.745749  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:57:05.913677  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:57:06.384962  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:57:07.036559  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:57:07.437110  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:57:07.602655  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:57:07.603483  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:57:07.607251  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:57:07.612344  607523 out.go:252]   - Booting up control plane ...
	I1213 11:57:07.612453  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:57:07.612542  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:57:07.612663  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:57:07.626734  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:57:07.627071  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:57:07.634285  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:57:07.634609  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:57:07.634655  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:57:07.773578  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:57:07.773700  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:00:53.918383  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00010332s
	I1213 12:00:53.918411  603921 kubeadm.go:319] 
	I1213 12:00:53.918468  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:00:53.918502  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:00:53.918607  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:00:53.918611  603921 kubeadm.go:319] 
	I1213 12:00:53.918715  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:00:53.918747  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:00:53.918778  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:00:53.918782  603921 kubeadm.go:319] 
	I1213 12:00:53.924880  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:00:53.925344  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:00:53.925460  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:00:53.925729  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:00:53.925740  603921 kubeadm.go:319] 
	I1213 12:00:53.925866  603921 kubeadm.go:403] duration metric: took 8m5.987919453s to StartCluster
	I1213 12:00:53.925907  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:00:53.925972  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:00:53.926107  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:00:53.953173  603921 cri.go:89] found id: ""
	I1213 12:00:53.953257  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.953275  603921 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:00:53.953283  603921 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:00:53.953363  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:00:53.984628  603921 cri.go:89] found id: ""
	I1213 12:00:53.984655  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.984665  603921 logs.go:284] No container was found matching "etcd"
	I1213 12:00:53.984671  603921 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:00:53.984731  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:00:54.014942  603921 cri.go:89] found id: ""
	I1213 12:00:54.014969  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.014978  603921 logs.go:284] No container was found matching "coredns"
	I1213 12:00:54.014986  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:00:54.015045  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:00:54.064854  603921 cri.go:89] found id: ""
	I1213 12:00:54.064881  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.064890  603921 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:00:54.064897  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:00:54.064981  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:00:54.132162  603921 cri.go:89] found id: ""
	I1213 12:00:54.132187  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.132195  603921 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:00:54.132201  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:00:54.132311  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:00:54.159680  603921 cri.go:89] found id: ""
	I1213 12:00:54.159703  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.159712  603921 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:00:54.159718  603921 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:00:54.159779  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:00:54.185867  603921 cri.go:89] found id: ""
	I1213 12:00:54.185893  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.185902  603921 logs.go:284] No container was found matching "kindnet"
	I1213 12:00:54.185912  603921 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:00:54.185923  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:00:54.228270  603921 logs.go:123] Gathering logs for container status ...
	I1213 12:00:54.228303  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:00:54.257730  603921 logs.go:123] Gathering logs for kubelet ...
	I1213 12:00:54.257759  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:00:54.324854  603921 logs.go:123] Gathering logs for dmesg ...
	I1213 12:00:54.324892  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:00:54.342225  603921 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:00:54.342252  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:00:54.409722  603921 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:00:54.409752  603921 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:00:54.409821  603921 out.go:285] * 
	W1213 12:00:54.410005  603921 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.410026  603921 out.go:285] * 
	W1213 12:00:54.412399  603921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:00:54.417573  603921 out.go:203] 
	W1213 12:00:54.420481  603921 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.420529  603921 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:00:54.420553  603921 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:00:54.423665  603921 out.go:203] 
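The Suggestion printed above names a concrete retry flag. A minimal sketch of that retry for this profile (driver, runtime and Kubernetes version taken from this run; other flags are omitted, so treat this as an assumption rather than the exact command the test used):

    out/minikube-linux-arm64 start -p no-preload-307409 --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd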
	
	
	==> CRI-O <==
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116588744Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116681922Z" level=info msg="Neither image nor artifact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779768303Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779939299Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779997318Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117107034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.11758611Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117646903Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342232553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342586722Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342639301Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.33182054Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=43635d89-3bd4-44c2-825f-c8431c65dc6f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.335082522Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b9aa7c65-27ab-4115-8617-40478e0c4431 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.336915661Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ba139078-fdf0-4392-91a6-145cf5852d50 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.338604774Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=827565fb-635d-461a-bd67-b5ae5370ff66 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.339721074Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0529a105-853e-48a9-a6a2-0f2cc8e7d4de name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.344733068Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d292bb3c-e44b-4d74-9c47-e804425ec1f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.347983735Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5383fa2b-ffc4-4de0-8c1f-994389259392 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.616112342Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3027b62a-b474-4ce9-a79a-b73a049c156c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.61769885Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=085f7430-a688-461e-929e-a810830d4d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.619174448Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dcae434f-7a2a-45da-aecd-fe682d69c75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.620679297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=9c972166-33b8-4e43-8eb0-69fa78d92d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.621515325Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f406858a-9da8-4255-acef-b33ba48d16bf name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.622872825Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=378a3dd0-9334-4c41-946c-b18ffb0ce982 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.62375966Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=685e82ba-5807-4b97-bc6c-0036cf58fa30 name=/runtime.v1.ImageService/ImageStatus
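The CRI-O entries above only record image-status probes (with the control-plane images reported "not found" earlier in the window), so a quick on-node check of what CRI-O actually has cached can be useful. A sketch, assuming access to the node via `minikube ssh -p no-preload-307409`:

    sudo crictl images                                                    # list images CRI-O knows about
    sudo crictl inspecti registry.k8s.io/kube-apiserver:v1.35.0-beta.0    # detail for one image, if present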
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:57.264393    5851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:57.264982    5851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:57.266695    5851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:57.267144    5851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:57.268778    5851 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:00:57 up  3:43,  0 user,  load average: 0.90, 1.00, 1.58
	Linux no-preload-307409 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:00:54 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:55 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 13 12:00:55 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:55 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:55 no-preload-307409 kubelet[5712]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:55 no-preload-307409 kubelet[5712]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:55 no-preload-307409 kubelet[5712]: E1213 12:00:55.630680    5712 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:55 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:55 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:56 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Dec 13 12:00:56 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:56 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:56 no-preload-307409 kubelet[5750]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:56 no-preload-307409 kubelet[5750]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:56 no-preload-307409 kubelet[5750]: E1213 12:00:56.393051    5750 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:56 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:56 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:57 no-preload-307409 kubelet[5815]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:57 no-preload-307409 kubelet[5815]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:57 no-preload-307409 kubelet[5815]: E1213 12:00:57.127886    5815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
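The kubelet journal above is the root cause visible in this run: each systemd restart (counters 648-650) exits during config validation with "kubelet is configured to not run on a host using cgroup v1", so the 10248/healthz endpoint that the kubeadm wait loop polls never comes up. The kubeadm warning earlier in the log names the relevant knob ('FailCgroupV1'); a hedged sketch of flipping it in the config file kubeadm writes on this node (field name and casing are an assumption taken from that warning text, so verify against the kubelet v1.35 KubeletConfiguration reference before relying on it):

    # on the node, e.g. via `minikube ssh -p no-preload-307409`
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet
    sudo journalctl -u kubelet -n 50    # the same troubleshooting command kubeadm suggests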
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 6 (333.904711ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:00:57.741329  617578 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
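The status probe above also warns that kubectl is pointing at a stale context, and the stderr shows why: the "no-preload-307409" entry is missing from the jenkins kubeconfig. The fix the warning itself suggests, as a sketch:

    out/minikube-linux-arm64 update-context -p no-preload-307409
    kubectl config current-context    # confirm the context now resolves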
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307409
helpers_test.go:244: (dbg) docker inspect no-preload-307409:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	        "Created": "2025-12-13T11:52:23.357834479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:52:23.426122666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hosts",
	        "LogPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a-json.log",
	        "Name": "/no-preload-307409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-307409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	                "LowerDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307409",
	                "Source": "/var/lib/docker/volumes/no-preload-307409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307409",
	                "name.minikube.sigs.k8s.io": "no-preload-307409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bbb75ba869ad4e24d065678acb24f13b332d42f86102a96ce228c9f56900de1",
	            "SandboxKey": "/var/run/docker/netns/3bbb75ba869a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-307409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:08:52:80:ec:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "280e424abad6162e6fbaaf316b3c6095ab0d80a59a1f82eb556a84b2dd4f139a",
	                    "EndpointID": "fa43d8567fac17df2e79f566f84f62b5ae267b3a77d79f87cf8d10e233d98a54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307409",
	                        "9fe6186bf0c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
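The inspect output above shows how the kic container wires the apiserver to the host: 8443/tcp is published on 127.0.0.1:33461, with the container at 192.168.85.2 on the no-preload-307409 network. Two equivalent ways to pull just that mapping, as a sketch against the same .NetworkSettings.Ports structure shown above:

    docker port no-preload-307409 8443/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-307409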
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 6 (347.89914ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:00:58.121330  617665 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-307409 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                            │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ delete  │ -p old-k8s-version-051699                                                                                                                                                                                                                            │ old-k8s-version-051699       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:49 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:52:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:52:44.222945  607523 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:44.223057  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223099  607523 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:44.223106  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223364  607523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:44.223812  607523 out.go:368] Setting JSON to false
	I1213 11:52:44.224724  607523 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12917,"bootTime":1765613848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:44.224797  607523 start.go:143] virtualization:  
	I1213 11:52:44.228935  607523 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:44.232087  607523 notify.go:221] Checking for updates...
	I1213 11:52:44.232862  607523 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:44.236046  607523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:44.241086  607523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:44.244482  607523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:44.247343  607523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:44.250267  607523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:44.253709  607523 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:44.253853  607523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:44.284666  607523 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:44.284774  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.401910  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.38729859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.402031  607523 docker.go:319] overlay module found
	I1213 11:52:44.405585  607523 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:44.408428  607523 start.go:309] selected driver: docker
	I1213 11:52:44.408454  607523 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:44.408468  607523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:44.409713  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.548406  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.53777287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.548555  607523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:52:44.548581  607523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:52:44.549476  607523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:52:44.552258  607523 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:44.555279  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.555356  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.555365  607523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:44.555448  607523 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:44.558889  607523 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 11:52:44.561893  607523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:44.564946  607523 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:44.567939  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:44.568029  607523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:52:44.568050  607523 cache.go:65] Caching tarball of preloaded images
	I1213 11:52:44.568145  607523 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:52:44.568156  607523 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 11:52:44.568295  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:44.568315  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json: {Name:mkca051d0f4222f12ada2e542e9765aa1caaa1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:44.568460  607523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:44.614235  607523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:44.614511  607523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:44.614568  607523 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:44.614617  607523 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:44.614732  607523 start.go:364] duration metric: took 92.595µs to acquireMachinesLock for "newest-cni-800979"
	I1213 11:52:44.614763  607523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:44.614850  607523 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:43.447904  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.748996566s)
	I1213 11:52:43.447934  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 11:52:43.447952  603921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:43.448001  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:44.178615  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:52:44.178655  603921 cache_images.go:125] Successfully loaded all cached images
	I1213 11:52:44.178662  603921 cache_images.go:94] duration metric: took 13.878753268s to LoadCachedImages
	I1213 11:52:44.178674  603921 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:44.178763  603921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:44.178851  603921 ssh_runner.go:195] Run: crio config
	I1213 11:52:44.242383  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.242401  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.242418  603921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:52:44.242441  603921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:44.242555  603921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:44.242622  603921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.254521  603921 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 11:52:44.254582  603921 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.274613  603921 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 11:52:44.274705  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 11:52:44.275568  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 11:52:44.278466  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 11:52:44.279131  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 11:52:44.279162  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 11:52:45.122331  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:45.166456  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 11:52:45.191725  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 11:52:45.191781  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 11:52:45.304315  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 11:52:45.334054  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 11:52:45.334112  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1213 11:52:46.015388  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:46.024888  603921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:46.040762  603921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:46.056856  603921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 11:52:46.080441  603921 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:46.084885  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:46.097815  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:46.230479  603921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:46.251958  603921 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 11:52:46.251982  603921 certs.go:195] generating shared ca certs ...
	I1213 11:52:46.251998  603921 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.252212  603921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:46.252287  603921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:46.252302  603921 certs.go:257] generating profile certs ...
	I1213 11:52:46.252373  603921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 11:52:46.252392  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt with IP's: []
	I1213 11:52:46.687159  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt ...
	I1213 11:52:46.687196  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt: {Name:mkd3b6de93eb4d0d7c38606e110ec8041a7a8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687382  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key ...
	I1213 11:52:46.687530  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key: {Name:mk69f4e38edb3a6758b30b8919bec09ed6524780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687680  603921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 11:52:46.687705  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:52:47.101196  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b ...
	I1213 11:52:47.101275  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b: {Name:mkf348306e6448fd779f0c40568bfbc2591db27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101515  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b ...
	I1213 11:52:47.101554  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b: {Name:mk67006fcc87c7852dc9dd2baf2e5c091f89fb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101697  603921 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt
	I1213 11:52:47.101816  603921 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key
	I1213 11:52:47.101906  603921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 11:52:47.101964  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt with IP's: []
	I1213 11:52:47.391626  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt ...
	I1213 11:52:47.391702  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt: {Name:mk6bf9ff3c46be8a69edc887a1d740e84c930536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.391910  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key ...
	I1213 11:52:47.391946  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key: {Name:mk5282a1a4966c51394d6aeb663ae12cef8b3a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.392186  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:47.392256  603921 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:47.392281  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:47.392345  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:47.392401  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:47.392449  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:47.392534  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:47.393177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:47.413169  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:47.433634  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:47.456446  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:47.475453  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:47.495921  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:52:47.516359  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:47.533557  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:52:47.553686  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:47.576528  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:47.595023  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:47.617574  603921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:47.632766  603921 ssh_runner.go:195] Run: openssl version
	I1213 11:52:47.642255  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.651062  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:47.660280  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665117  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665212  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.711366  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:52:47.719094  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:52:47.727218  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.735147  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:52:47.743430  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748386  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748477  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.811036  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:52:47.824172  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:52:47.833720  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.842937  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:47.852257  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857336  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857459  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.913987  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.923742  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.932105  603921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:52:47.937831  603921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:52:47.937953  603921 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:47.938056  603921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:52:47.938131  603921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:52:47.977617  603921 cri.go:89] found id: ""
	I1213 11:52:47.977734  603921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:52:47.986677  603921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:52:47.995428  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:52:47.995568  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:52:48.012929  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:52:48.013001  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:52:48.013078  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:52:48.023587  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:52:48.023720  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:52:48.033048  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:52:48.042898  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:52:48.043030  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:52:48.052336  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.062442  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:52:48.062560  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.071404  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:52:48.081302  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:52:48.081415  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:52:48.090412  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:52:48.139895  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:52:48.140310  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:52:48.244346  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:52:48.244445  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:52:48.244514  603921 kubeadm.go:319] OS: Linux
	I1213 11:52:48.244581  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:52:48.244649  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:52:48.244717  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:52:48.244785  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:52:48.244849  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:52:48.244917  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:52:48.244983  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:52:48.245052  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:52:48.245113  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:52:48.326956  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:52:48.327125  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:52:48.327254  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:52:48.353781  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:52:44.618660  607523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:44.618986  607523 start.go:159] libmachine.API.Create for "newest-cni-800979" (driver="docker")
	I1213 11:52:44.619024  607523 client.go:173] LocalClient.Create starting
	I1213 11:52:44.619095  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:44.619134  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619169  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619234  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:44.619259  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619275  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619828  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:44.681886  607523 cli_runner.go:211] docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:44.682019  607523 network_create.go:284] running [docker network inspect newest-cni-800979] to gather additional debugging logs...
	I1213 11:52:44.682044  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979
	W1213 11:52:44.783263  607523 cli_runner.go:211] docker network inspect newest-cni-800979 returned with exit code 1
	I1213 11:52:44.783303  607523 network_create.go:287] error running [docker network inspect newest-cni-800979]: docker network inspect newest-cni-800979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800979 not found
	I1213 11:52:44.783456  607523 network_create.go:289] output of [docker network inspect newest-cni-800979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800979 not found
	
	** /stderr **
	I1213 11:52:44.783853  607523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:44.869365  607523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:44.869936  607523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:44.870324  607523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:44.872231  607523 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 11:52:44.872625  607523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-280e424abad6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5e:ad:5b:52:ee:cb} reservation:<nil>}
	I1213 11:52:44.873100  607523 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a730}
	I1213 11:52:44.873121  607523 network_create.go:124] attempt to create docker network newest-cni-800979 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 11:52:44.873186  607523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800979 newest-cni-800979
	I1213 11:52:45.033952  607523 network_create.go:108] docker network newest-cni-800979 192.168.94.0/24 created
	I1213 11:52:45.033989  607523 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-800979" container
	I1213 11:52:45.034089  607523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:45.110922  607523 cli_runner.go:164] Run: docker volume create newest-cni-800979 --label name.minikube.sigs.k8s.io=newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:45.147181  607523 oci.go:103] Successfully created a docker volume newest-cni-800979
	I1213 11:52:45.148756  607523 cli_runner.go:164] Run: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:46.576150  607523 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.427287827s)
	I1213 11:52:46.576182  607523 oci.go:107] Successfully prepared a docker volume newest-cni-800979
	I1213 11:52:46.576222  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:46.576231  607523 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:52:46.576286  607523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:52:48.362615  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:52:48.362749  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:52:48.362861  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:52:48.406340  603921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:52:48.617898  603921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:52:48.894950  603921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:52:49.002897  603921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:52:49.595632  603921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:52:49.596022  603921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.703067  603921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:52:49.703500  603921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.852748  603921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:52:49.985441  603921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:52:50.361702  603921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:52:50.362007  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:52:50.448441  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:52:50.524868  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:52:51.254957  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:52:51.473347  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:52:51.686418  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:52:51.686517  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:52:51.690277  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:52:51.694117  603921 out.go:252]   - Booting up control plane ...
	I1213 11:52:51.694231  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:52:51.694310  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:52:51.695018  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:52:51.714016  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:52:51.714689  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:52:51.728439  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:52:51.728548  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:52:51.728589  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:52:51.918802  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:52:51.918928  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:52:51.477960  607523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.901639858s)
	I1213 11:52:51.478004  607523 kic.go:203] duration metric: took 4.901755297s to extract preloaded images to volume ...
	W1213 11:52:51.478154  607523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:51.478257  607523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:51.600099  607523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800979 --name newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800979 --network newest-cni-800979 --ip 192.168.94.2 --volume newest-cni-800979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:52.003446  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Running}}
	I1213 11:52:52.025630  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.044945  607523 cli_runner.go:164] Run: docker exec newest-cni-800979 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:52.103780  607523 oci.go:144] the created container "newest-cni-800979" has a running status.
	I1213 11:52:52.103827  607523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa...
	I1213 11:52:52.454986  607523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:52.499855  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.520167  607523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:52.520186  607523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-800979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:52.595209  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.616614  607523 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:52.616710  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:52.645695  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:52.646054  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:52.646065  607523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:52.646853  607523 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49104->127.0.0.1:33463: read: connection reset by peer
	I1213 11:52:55.795509  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.795546  607523 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 11:52:55.795609  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:55.823768  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:55.824086  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:55.824105  607523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 11:52:55.984531  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.984627  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.004427  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.004789  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.004806  607523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:56.155779  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:52:56.155809  607523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:56.155840  607523 ubuntu.go:190] setting up certificates
	I1213 11:52:56.155849  607523 provision.go:84] configureAuth start
	I1213 11:52:56.155916  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:56.173051  607523 provision.go:143] copyHostCerts
	I1213 11:52:56.173126  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:56.173140  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:56.173218  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:56.173314  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:56.173326  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:56.173354  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:56.173407  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:56.173416  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:56.173440  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:56.173493  607523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 11:52:56.495741  607523 provision.go:177] copyRemoteCerts
	I1213 11:52:56.495819  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:56.495860  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.513776  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:56.623272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:56.640893  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:56.658251  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:56.675898  607523 provision.go:87] duration metric: took 520.035144ms to configureAuth
	I1213 11:52:56.675924  607523 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:56.676119  607523 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:56.676229  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.693573  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.693885  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.693913  607523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:57.000433  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:57.000459  607523 machine.go:97] duration metric: took 4.383824523s to provisionDockerMachine
	I1213 11:52:57.000471  607523 client.go:176] duration metric: took 12.381437402s to LocalClient.Create
	I1213 11:52:57.000485  607523 start.go:167] duration metric: took 12.381502329s to libmachine.API.Create "newest-cni-800979"
	I1213 11:52:57.000493  607523 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 11:52:57.000506  607523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:57.000573  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:57.000635  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.019654  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.123498  607523 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:57.126887  607523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:57.126915  607523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:57.126942  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:57.127003  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:57.127090  607523 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:57.127193  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:57.134628  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:57.153601  607523 start.go:296] duration metric: took 153.093637ms for postStartSetup
	I1213 11:52:57.154022  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.174170  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:57.174465  607523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:57.174516  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.191003  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.300652  607523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:57.305941  607523 start.go:128] duration metric: took 12.691075107s to createHost
	I1213 11:52:57.305969  607523 start.go:83] releasing machines lock for "newest-cni-800979", held for 12.691222882s
	I1213 11:52:57.306067  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.324383  607523 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:57.324411  607523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:57.324436  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.324473  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.349379  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.349454  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.540188  607523 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:57.546743  607523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:57.581981  607523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:57.586210  607523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:57.586277  607523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:57.614440  607523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:52:57.614460  607523 start.go:496] detecting cgroup driver to use...
	I1213 11:52:57.614492  607523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:57.614539  607523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:57.632118  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:57.645277  607523 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:57.645361  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:57.663447  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:57.682384  607523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:57.805277  607523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:57.932514  607523 docker.go:234] disabling docker service ...
	I1213 11:52:57.932589  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:57.955202  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:57.968354  607523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:58.113128  607523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:58.247772  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:58.262298  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:58.277400  607523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:58.277526  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.287200  607523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:58.287335  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.296697  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.305672  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.315083  607523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:58.324248  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.333206  607523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.346564  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.355703  607523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:58.363253  607523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:58.370805  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:58.492125  607523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:52:58.663207  607523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:58.663336  607523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:58.667219  607523 start.go:564] Will wait 60s for crictl version
	I1213 11:52:58.667334  607523 ssh_runner.go:195] Run: which crictl
	I1213 11:52:58.671116  607523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:58.697501  607523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:52:58.697619  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.733197  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.768647  607523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:58.771459  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:58.789274  607523 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:58.795116  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:58.812164  607523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:52:58.814926  607523 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:58.815100  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:58.815179  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.855416  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.855438  607523 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:52:58.855493  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.882823  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.882846  607523 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:52:58.882855  607523 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:58.882940  607523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:58.883028  607523 ssh_runner.go:195] Run: crio config
	I1213 11:52:58.937332  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:58.937355  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:58.937377  607523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:52:58.937402  607523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:58.937530  607523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 11:52:58.937607  607523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:58.945256  607523 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:52:58.945332  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:58.952916  607523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:58.965421  607523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:58.978594  607523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1213 11:52:58.991343  607523 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:58.994981  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:59.006043  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:59.120731  607523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:59.136632  607523 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 11:52:59.136650  607523 certs.go:195] generating shared ca certs ...
	I1213 11:52:59.136667  607523 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.136813  607523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:59.136864  607523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:59.136875  607523 certs.go:257] generating profile certs ...
	I1213 11:52:59.136930  607523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 11:52:59.136948  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt with IP's: []
	I1213 11:52:59.229537  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt ...
	I1213 11:52:59.229569  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt: {Name:mk69c62c6a65f19f1e9ae6f6006b84310e5ca69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229797  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key ...
	I1213 11:52:59.229813  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key: {Name:mk0d678e2df0ba46ea7a7d9db0beddac15d16cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229927  607523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 11:52:59.229947  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 11:52:59.395722  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 ...
	I1213 11:52:59.395753  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606: {Name:mk2f0d7037f2191b2fb310c8e6e39abce6919307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.395933  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 ...
	I1213 11:52:59.395948  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606: {Name:mkeda4d05cf7f14a6919666348bb90fff24821e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.396035  607523 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt
	I1213 11:52:59.396122  607523 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key
	I1213 11:52:59.396187  607523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 11:52:59.396205  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt with IP's: []
	I1213 11:52:59.677399  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt ...
	I1213 11:52:59.677431  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt: {Name:mk4f6f44ef9664fbc510805af3a0a5d8216b34d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677617  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key ...
	I1213 11:52:59.677634  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key: {Name:mk08e1a717d212a6e36443fd4449253d4dfd4e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677867  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:59.677925  607523 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:59.677936  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:59.677963  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:59.677989  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:59.678018  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:59.678067  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:59.678646  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:59.697504  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:59.715937  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:59.733272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:59.751842  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:59.769868  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:52:59.787032  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:59.804197  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:52:59.822307  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:59.840119  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:59.857580  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:59.875033  607523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:59.887226  607523 ssh_runner.go:195] Run: openssl version
	I1213 11:52:59.893568  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.900683  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:59.907927  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911699  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911785  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.952546  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.959999  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.967191  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.974551  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:59.981936  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985667  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985735  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:53:00.029636  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:53:00.039949  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:53:00.051259  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.062203  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:53:00.071922  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077479  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077644  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.129667  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:53:00.145873  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:53:00.165719  607523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:53:00.182484  607523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:53:00.182650  607523 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:53:00.191964  607523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:53:00.192781  607523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:53:00.308764  607523 cri.go:89] found id: ""
	I1213 11:53:00.308851  607523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:53:00.339801  607523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:53:00.369102  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:53:00.369171  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:53:00.383298  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:53:00.383367  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:53:00.383424  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:53:00.395580  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:53:00.395656  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:53:00.405571  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:53:00.415778  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:53:00.415854  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:53:00.424800  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.434079  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:53:00.434162  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.443040  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:53:00.452144  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:53:00.452246  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:53:00.461542  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:53:00.503183  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:53:00.503307  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:53:00.580961  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:53:00.581064  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:53:00.581117  607523 kubeadm.go:319] OS: Linux
	I1213 11:53:00.581167  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:53:00.581226  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:53:00.581277  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:53:00.581327  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:53:00.581379  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:53:00.581429  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:53:00.581478  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:53:00.581529  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:53:00.581581  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:53:00.654422  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:53:00.654539  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:53:00.654635  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:53:00.667854  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:53:00.673949  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:53:00.674119  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:53:00.674229  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:53:00.749466  607523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:53:00.853085  607523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:53:01.087749  607523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:53:01.312048  607523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:53:01.513347  607523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:53:01.513768  607523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:01.838749  607523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:53:01.839657  607523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:02.478657  607523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:53:02.876105  607523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:53:03.010338  607523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:53:03.010418  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:53:03.200889  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:53:03.653890  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:53:04.344965  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:53:04.580887  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:53:04.785257  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:53:04.787179  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:53:04.796409  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:53:04.799699  607523 out.go:252]   - Booting up control plane ...
	I1213 11:53:04.799829  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:53:04.799918  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:53:04.803001  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:53:04.836757  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:53:04.837037  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:53:04.849469  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:53:04.850109  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:53:04.853862  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:53:05.015188  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:53:05.015326  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:56:51.920072  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001224221s
	I1213 11:56:51.920104  603921 kubeadm.go:319] 
	I1213 11:56:51.920212  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:56:51.920270  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:56:51.920608  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:56:51.920619  603921 kubeadm.go:319] 
	I1213 11:56:51.920812  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:56:51.920869  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:56:51.921157  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:56:51.921165  603921 kubeadm.go:319] 
	I1213 11:56:51.925513  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:56:51.926006  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:56:51.926180  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:56:51.926479  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:56:51.926517  603921 kubeadm.go:319] 
	W1213 11:56:51.926771  603921 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001224221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
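At this point the first kubeadm init attempt has failed in the wait-control-plane phase: manifests and kubeconfig files were written and the kubelet was started, but it never answered its health endpoint within 4m0s. The troubleshooting steps the error message itself names can be run by hand from a shell inside the node; a minimal sketch (the profile name newest-cni-800979 is taken from the log, and 'minikube ssh -p' is assumed as one way to reach the node under the docker driver):

    # open a shell inside the node for this profile
    minikube ssh -p newest-cni-800979

    # the commands the kubeadm error message suggests
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet

    # the health probe kubeadm was polling (per the log, it timed out after 4m0s)
    curl -sSL http://127.0.0.1:10248/healthz

The report does not include the kubelet journal for this window, so the underlying reason the kubelet stayed unhealthy is not visible in this section.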
	
	I1213 11:56:51.926983  603921 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:56:51.927241  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:56:52.337349  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:56:52.355756  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:56:52.355865  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:56:52.364798  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:56:52.364819  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:56:52.364872  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:56:52.373016  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:56:52.373085  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:56:52.380868  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:56:52.388839  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:56:52.388908  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:56:52.396493  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.404428  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:56:52.404492  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.412543  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:56:52.420710  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:56:52.420784  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:56:52.428931  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:56:52.469486  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:56:52.469812  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:56:52.544538  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:56:52.544634  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:56:52.544691  603921 kubeadm.go:319] OS: Linux
	I1213 11:56:52.544758  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:56:52.544826  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:56:52.544893  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:56:52.544959  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:56:52.545027  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:56:52.545094  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:56:52.545159  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:56:52.545225  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:56:52.545290  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:56:52.613010  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:56:52.613120  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:56:52.613213  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:56:52.631911  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:56:52.635687  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:56:52.635862  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:56:52.635952  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:56:52.636046  603921 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:56:52.636157  603921 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:56:52.636251  603921 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:56:52.636343  603921 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:56:52.636411  603921 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:56:52.636489  603921 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:56:52.636569  603921 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:56:52.636650  603921 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:56:52.636696  603921 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:56:52.636757  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:56:52.776698  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:56:52.958761  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:56:53.117866  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:56:53.292950  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:56:53.736752  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:56:53.737374  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:56:53.739900  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:56:53.743260  603921 out.go:252]   - Booting up control plane ...
	I1213 11:56:53.743409  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:56:53.743561  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:56:53.743673  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:56:53.757211  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:56:53.757338  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:56:53.765875  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:56:53.766984  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:56:53.767070  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:56:53.918187  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:56:53.918313  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:57:05.013826  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000267538s
	I1213 11:57:05.013870  607523 kubeadm.go:319] 
	I1213 11:57:05.013935  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:57:05.013971  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:57:05.014088  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:57:05.014096  607523 kubeadm.go:319] 
	I1213 11:57:05.014210  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:57:05.014246  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:57:05.014279  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:57:05.014287  607523 kubeadm.go:319] 
	I1213 11:57:05.020057  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:57:05.020490  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:57:05.020604  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:57:05.020844  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:57:05.020856  607523 kubeadm.go:319] 
	I1213 11:57:05.020925  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:57:05.021047  607523 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000267538s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
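The retry fails the same way, and every attempt prints the same SystemVerification warning: the node is running cgroups v1, which kubelet v1.35 treats as opt-in via the configuration option 'FailCgroupV1' (the warning adds that the preflight validation must also be skipped explicitly). Whether this is the actual cause of the unhealthy kubelet is not established by this report, but as a hedged sketch of the opt-in the warning describes, assuming the KubeletConfiguration YAML field is spelled failCgroupV1 and using the config path kubeadm reports writing:

    # run inside the node, e.g. via 'minikube ssh -p newest-cni-800979'
    # append the cgroup v1 opt-in to the kubelet config kubeadm generated ...
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    # ... and restart the kubelet so it re-reads the file
    sudo systemctl restart kubelet

This is illustrative only; minikube would normally carry such a setting through its own kubeadm/kubelet configuration rather than a manual edit on the node.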
	
	I1213 11:57:05.021134  607523 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:57:05.432952  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:57:05.445933  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:57:05.446023  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:57:05.454556  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:57:05.454578  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:57:05.454629  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:57:05.462597  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:57:05.462670  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:57:05.470456  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:57:05.478316  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:57:05.478382  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:57:05.485947  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.494252  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:57:05.494320  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.502133  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:57:05.510237  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:57:05.510311  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:57:05.518001  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:57:05.584840  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:57:05.585142  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:57:05.657959  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:57:05.658125  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:57:05.658198  607523 kubeadm.go:319] OS: Linux
	I1213 11:57:05.658288  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:57:05.658378  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:57:05.658471  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:57:05.658558  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:57:05.658635  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:57:05.658730  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:57:05.658813  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:57:05.658915  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:57:05.659000  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:57:05.731597  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:57:05.731775  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:57:05.731903  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:57:05.740855  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:57:05.744423  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:57:05.744578  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:57:05.744679  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:57:05.744796  607523 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:57:05.744887  607523 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:57:05.744992  607523 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:57:05.745076  607523 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:57:05.745170  607523 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:57:05.745499  607523 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:57:05.745582  607523 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:57:05.745655  607523 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:57:05.745694  607523 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:57:05.745749  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:57:05.913677  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:57:06.384962  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:57:07.036559  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:57:07.437110  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:57:07.602655  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:57:07.603483  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:57:07.607251  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:57:07.612344  607523 out.go:252]   - Booting up control plane ...
	I1213 11:57:07.612453  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:57:07.612542  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:57:07.612663  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:57:07.626734  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:57:07.627071  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:57:07.634285  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:57:07.634609  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:57:07.634655  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:57:07.773578  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:57:07.773700  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:00:53.918383  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00010332s
	I1213 12:00:53.918411  603921 kubeadm.go:319] 
	I1213 12:00:53.918468  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:00:53.918502  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:00:53.918607  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:00:53.918611  603921 kubeadm.go:319] 
	I1213 12:00:53.918715  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:00:53.918747  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:00:53.918778  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:00:53.918782  603921 kubeadm.go:319] 
	I1213 12:00:53.924880  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:00:53.925344  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:00:53.925460  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:00:53.925729  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:00:53.925740  603921 kubeadm.go:319] 
	I1213 12:00:53.925866  603921 kubeadm.go:403] duration metric: took 8m5.987919453s to StartCluster
	I1213 12:00:53.925907  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:00:53.925972  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:00:53.926107  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:00:53.953173  603921 cri.go:89] found id: ""
	I1213 12:00:53.953257  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.953275  603921 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:00:53.953283  603921 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:00:53.953363  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:00:53.984628  603921 cri.go:89] found id: ""
	I1213 12:00:53.984655  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.984665  603921 logs.go:284] No container was found matching "etcd"
	I1213 12:00:53.984671  603921 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:00:53.984731  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:00:54.014942  603921 cri.go:89] found id: ""
	I1213 12:00:54.014969  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.014978  603921 logs.go:284] No container was found matching "coredns"
	I1213 12:00:54.014986  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:00:54.015045  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:00:54.064854  603921 cri.go:89] found id: ""
	I1213 12:00:54.064881  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.064890  603921 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:00:54.064897  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:00:54.064981  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:00:54.132162  603921 cri.go:89] found id: ""
	I1213 12:00:54.132187  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.132195  603921 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:00:54.132201  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:00:54.132311  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:00:54.159680  603921 cri.go:89] found id: ""
	I1213 12:00:54.159703  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.159712  603921 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:00:54.159718  603921 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:00:54.159779  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:00:54.185867  603921 cri.go:89] found id: ""
	I1213 12:00:54.185893  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.185902  603921 logs.go:284] No container was found matching "kindnet"
	I1213 12:00:54.185912  603921 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:00:54.185923  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:00:54.228270  603921 logs.go:123] Gathering logs for container status ...
	I1213 12:00:54.228303  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:00:54.257730  603921 logs.go:123] Gathering logs for kubelet ...
	I1213 12:00:54.257759  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:00:54.324854  603921 logs.go:123] Gathering logs for dmesg ...
	I1213 12:00:54.324892  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:00:54.342225  603921 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:00:54.342252  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:00:54.409722  603921 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
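The failed 'describe nodes' is consistent with the empty container listings above: with the kubelet never healthy, no static-pod containers were created, so nothing serves the API on localhost:8443 and every request is refused. A quick hand check of that state, reusing the same commands the log shows minikube running inside the node:

    # no kube-apiserver or etcd container should exist while the kubelet is unhealthy
    sudo crictl ps -a --name=kube-apiserver
    sudo crictl ps -a --name=etcd

    # hence the API endpoint refuses connections
    curl -k https://localhost:8443/version   # expected: connection refused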
	W1213 12:00:54.409752  603921 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:00:54.409821  603921 out.go:285] * 
	W1213 12:00:54.410005  603921 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.410026  603921 out.go:285] * 
	W1213 12:00:54.412399  603921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:00:54.417573  603921 out.go:203] 
	W1213 12:00:54.420481  603921 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.420529  603921 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:00:54.420553  603921 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:00:54.423665  603921 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116588744Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116681922Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779768303Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779939299Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779997318Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117107034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.11758611Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117646903Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342232553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342586722Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342639301Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.33182054Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=43635d89-3bd4-44c2-825f-c8431c65dc6f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.335082522Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b9aa7c65-27ab-4115-8617-40478e0c4431 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.336915661Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ba139078-fdf0-4392-91a6-145cf5852d50 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.338604774Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=827565fb-635d-461a-bd67-b5ae5370ff66 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.339721074Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0529a105-853e-48a9-a6a2-0f2cc8e7d4de name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.344733068Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d292bb3c-e44b-4d74-9c47-e804425ec1f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.347983735Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5383fa2b-ffc4-4de0-8c1f-994389259392 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.616112342Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3027b62a-b474-4ce9-a79a-b73a049c156c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.61769885Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=085f7430-a688-461e-929e-a810830d4d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.619174448Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dcae434f-7a2a-45da-aecd-fe682d69c75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.620679297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=9c972166-33b8-4e43-8eb0-69fa78d92d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.621515325Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f406858a-9da8-4255-acef-b33ba48d16bf name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.622872825Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=378a3dd0-9334-4c41-946c-b18ffb0ce982 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.62375966Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=685e82ba-5807-4b97-bc6c-0036cf58fa30 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:58.807871    5982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:58.808575    5982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:58.810306    5982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:58.810908    5982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:58.812492    5982 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:00:58 up  3:43,  0 user,  load average: 0.90, 1.00, 1.58
	Linux no-preload-307409 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:00:56 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:57 no-preload-307409 kubelet[5815]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:57 no-preload-307409 kubelet[5815]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:57 no-preload-307409 kubelet[5815]: E1213 12:00:57.127886    5815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:57 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:57 no-preload-307409 kubelet[5880]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:57 no-preload-307409 kubelet[5880]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:57 no-preload-307409 kubelet[5880]: E1213 12:00:57.892126    5880 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:57 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:00:58 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 13 12:00:58 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:58 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:00:58 no-preload-307409 kubelet[5929]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:58 no-preload-307409 kubelet[5929]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:00:58 no-preload-307409 kubelet[5929]: E1213 12:00:58.615090    5929 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:00:58 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:00:58 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
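
The kubelet journal above shows why the control plane never came up: kubelet v1.35.0-beta.0 refuses to validate its configuration on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out after 4m0s. A minimal triage sketch using only commands quoted in the output itself; the profile name no-preload-307409 is taken from this run, and whether the suggested cgroup-driver override actually helps on a cgroup v1 host is an open question, not something this report confirms:

	# Watch the kubelet restart loop from inside the minikube node
	minikube ssh -p no-preload-307409 "sudo systemctl status kubelet"
	minikube ssh -p no-preload-307409 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# Retry with the override named in the K8S_KUBELET_NOT_RUNNING suggestion above
	minikube start -p no-preload-307409 --extra-config=kubelet.cgroup-driver=systemd

The kubeadm warning points at the other route: explicitly setting the kubelet configuration option 'FailCgroupV1' to 'false' (and skipping the SystemVerification check) to keep running on cgroup v1.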
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 6 (403.827951ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:00:59.341374  617883 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (122.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (2m0.149606179s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-307409 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-307409 describe deploy/metrics-server -n kube-system: exit status 1 (73.339917ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-307409" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-307409 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
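
Each kubectl apply in the addon callback fails during validation for the same reason: nothing is listening on localhost:8443, the apiserver endpoint in the node's kubeconfig. A quick reachability check, sketched against the same in-node kubectl binary and kubeconfig paths quoted in the error above (sudo access inside the node is assumed, as in the failing callback):

	# Ask the apiserver for readiness before blaming the addon manifests
	minikube ssh -p no-preload-307409 "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz"
	# A connection-refused here, as in this run, means the control plane is down,
	# so kubectl's --validate=false hint would not rescue the apply either.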
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307409
helpers_test.go:244: (dbg) docker inspect no-preload-307409:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	        "Created": "2025-12-13T11:52:23.357834479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604226,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:52:23.426122666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hosts",
	        "LogPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a-json.log",
	        "Name": "/no-preload-307409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-307409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	                "LowerDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307409",
	                "Source": "/var/lib/docker/volumes/no-preload-307409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307409",
	                "name.minikube.sigs.k8s.io": "no-preload-307409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bbb75ba869ad4e24d065678acb24f13b332d42f86102a96ce228c9f56900de1",
	            "SandboxKey": "/var/run/docker/netns/3bbb75ba869a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-307409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:08:52:80:ec:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "280e424abad6162e6fbaaf316b3c6095ab0d80a59a1f82eb556a84b2dd4f139a",
	                    "EndpointID": "fa43d8567fac17df2e79f566f84f62b5ae267b3a77d79f87cf8d10e233d98a54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307409",
	                        "9fe6186bf0c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
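
The inspect dump is long, but the detail the harness cares about is the 8443/tcp mapping under NetworkSettings.Ports. A one-line sketch for reading it back out with standard docker inspect Go templating; the container name is this run's profile:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-307409
	# prints 33461 for this run: the host port mapped to the apiserver's 8443/tcp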
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 6 (383.210906ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:02:59.975409  622328 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
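
Both status probes in this post-mortem exit with status 6 for the same reason: the host kubeconfig at /home/jenkins/minikube-integration/22127-354468/kubeconfig no longer contains an endpoint for this profile, even though the container itself reports Running. The warning in the output names the remedy; sketched here with this run's profile:

	# Rewrite the stale kubeconfig entry, then re-check status
	minikube update-context -p no-preload-307409
	minikube status -p no-preload-307409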
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-307409 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-307409 logs -n 25: (1.118442122s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ stop    │ -p newest-cni-800979 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-800979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:02:49
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:02:49.233504  620795 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:02:49.233644  620795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:02:49.233655  620795 out.go:374] Setting ErrFile to fd 2...
	I1213 12:02:49.233660  620795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:02:49.233910  620795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:02:49.234294  620795 out.go:368] Setting JSON to false
	I1213 12:02:49.235159  620795 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13522,"bootTime":1765613848,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:02:49.235231  620795 start.go:143] virtualization:  
	I1213 12:02:49.240415  620795 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:02:49.243444  620795 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:02:49.243505  620795 notify.go:221] Checking for updates...
	I1213 12:02:49.249923  620795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:02:49.252821  620795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:02:49.255716  620795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:02:49.258605  620795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:02:49.261497  620795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:02:49.264842  620795 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:02:49.265447  620795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:02:49.298976  620795 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:02:49.299102  620795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:02:49.360087  620795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:02:49.350373468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:02:49.360195  620795 docker.go:319] overlay module found
	I1213 12:02:49.363607  620795 out.go:179] * Using the docker driver based on existing profile
	I1213 12:02:49.366432  620795 start.go:309] selected driver: docker
	I1213 12:02:49.366449  620795 start.go:927] validating driver "docker" against &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:02:49.366561  620795 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:02:49.367304  620795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:02:49.420058  620795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:02:49.411076686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:02:49.420394  620795 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 12:02:49.420426  620795 cni.go:84] Creating CNI manager for ""
	I1213 12:02:49.420475  620795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:02:49.420519  620795 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:02:49.425561  620795 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 12:02:49.428357  620795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:02:49.431401  620795 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:02:49.434172  620795 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:02:49.434226  620795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 12:02:49.434240  620795 cache.go:65] Caching tarball of preloaded images
	I1213 12:02:49.434255  620795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:02:49.434334  620795 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 12:02:49.434345  620795 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 12:02:49.434462  620795 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 12:02:49.454054  620795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:02:49.454078  620795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:02:49.454100  620795 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:02:49.454140  620795 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:02:49.454209  620795 start.go:364] duration metric: took 40.944µs to acquireMachinesLock for "newest-cni-800979"
	I1213 12:02:49.454234  620795 start.go:96] Skipping create...Using existing machine configuration
	I1213 12:02:49.454240  620795 fix.go:54] fixHost starting: 
	I1213 12:02:49.454523  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:49.472085  620795 fix.go:112] recreateIfNeeded on newest-cni-800979: state=Stopped err=<nil>
	W1213 12:02:49.472121  620795 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 12:02:49.475488  620795 out.go:252] * Restarting existing docker container for "newest-cni-800979" ...
	I1213 12:02:49.475615  620795 cli_runner.go:164] Run: docker start newest-cni-800979
	I1213 12:02:49.733455  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:49.759707  620795 kic.go:430] container "newest-cni-800979" state is running.
	I1213 12:02:49.760119  620795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 12:02:49.786102  620795 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 12:02:49.786339  620795 machine.go:94] provisionDockerMachine start ...
	I1213 12:02:49.786415  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:49.818261  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:49.818584  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:49.818599  620795 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:02:49.819284  620795 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:02:52.971159  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 12:02:52.971183  620795 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 12:02:52.971255  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:52.989036  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:52.989363  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:52.989383  620795 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 12:02:53.149336  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 12:02:53.149444  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.167147  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:53.167461  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:53.167485  620795 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:02:53.315867  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:02:53.315938  620795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:02:53.315974  620795 ubuntu.go:190] setting up certificates
	I1213 12:02:53.316007  620795 provision.go:84] configureAuth start
	I1213 12:02:53.316088  620795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 12:02:53.333295  620795 provision.go:143] copyHostCerts
	I1213 12:02:53.333374  620795 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:02:53.333389  620795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:02:53.333473  620795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:02:53.333584  620795 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:02:53.333595  620795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:02:53.333624  620795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:02:53.333688  620795 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:02:53.333695  620795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:02:53.333721  620795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:02:53.333777  620795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 12:02:53.395970  620795 provision.go:177] copyRemoteCerts
	I1213 12:02:53.396040  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:02:53.396087  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.418352  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:53.528800  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 12:02:53.552957  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 12:02:53.574405  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:02:53.600909  620795 provision.go:87] duration metric: took 284.882424ms to configureAuth
	I1213 12:02:53.600947  620795 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:02:53.601229  620795 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:02:53.601368  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.620841  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:53.621175  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:53.621196  620795 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:02:53.931497  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:02:53.931546  620795 machine.go:97] duration metric: took 4.145187533s to provisionDockerMachine
	I1213 12:02:53.931564  620795 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 12:02:53.931581  620795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:02:53.931661  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:02:53.931721  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.951503  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.064288  620795 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:02:54.068121  620795 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:02:54.068153  620795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:02:54.068165  620795 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:02:54.068219  620795 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:02:54.068306  620795 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:02:54.068414  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:02:54.076516  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:02:54.095247  620795 start.go:296] duration metric: took 163.663698ms for postStartSetup
	I1213 12:02:54.095344  620795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:02:54.095390  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:54.113108  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.216773  620795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:02:54.221562  620795 fix.go:56] duration metric: took 4.76731447s for fixHost
	I1213 12:02:54.221592  620795 start.go:83] releasing machines lock for "newest-cni-800979", held for 4.767370191s
	I1213 12:02:54.221679  620795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 12:02:54.239044  620795 ssh_runner.go:195] Run: cat /version.json
	I1213 12:02:54.239115  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:54.239379  620795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:02:54.239436  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:54.257600  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.258181  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.472852  620795 ssh_runner.go:195] Run: systemctl --version
	I1213 12:02:54.480091  620795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:02:54.517083  620795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:02:54.521766  620795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:02:54.521872  620795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:02:54.530083  620795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 12:02:54.530110  620795 start.go:496] detecting cgroup driver to use...
	I1213 12:02:54.530144  620795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:02:54.530193  620795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:02:54.545964  620795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:02:54.559256  620795 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:02:54.559343  620795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:02:54.575678  620795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:02:54.589520  620795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:02:54.710146  620795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:02:54.827028  620795 docker.go:234] disabling docker service ...
	I1213 12:02:54.827095  620795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:02:54.842094  620795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:02:54.855410  620795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:02:54.972511  620795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:02:55.125284  620795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:02:55.138359  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:02:55.153286  620795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:02:55.153415  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.163260  620795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:02:55.163390  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.174114  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.184426  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.194168  620795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:02:55.203273  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.213465  620795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.223135  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.232693  620795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:02:55.241786  620795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:02:55.250375  620795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:02:55.372259  620795 ssh_runner.go:195] Run: sudo systemctl restart crio
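For reference, the sed edits above leave the CRI-O drop-in with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl that the run then activates via daemon-reload and a crio restart. A minimal sketch for confirming the result on such a node (key names taken from the commands in this log; the file normally contains further keys that these edits do not touch):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",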
	I1213 12:02:55.566896  620795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:02:55.566968  620795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:02:55.570910  620795 start.go:564] Will wait 60s for crictl version
	I1213 12:02:55.570982  620795 ssh_runner.go:195] Run: which crictl
	I1213 12:02:55.574692  620795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:02:55.599155  620795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:02:55.599241  620795 ssh_runner.go:195] Run: crio --version
	I1213 12:02:55.632146  620795 ssh_runner.go:195] Run: crio --version
	I1213 12:02:55.666590  620795 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 12:02:55.669593  620795 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:02:55.685409  620795 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 12:02:55.689403  620795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:02:55.701909  620795 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 12:02:55.704755  620795 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:02:55.704897  620795 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:02:55.704972  620795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:02:55.736637  620795 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:02:55.736659  620795 crio.go:433] Images already preloaded, skipping extraction
	I1213 12:02:55.736712  620795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:02:55.768016  620795 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:02:55.768037  620795 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:02:55.768046  620795 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 12:02:55.768149  620795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
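The kubelet unit override above is written to a systemd drop-in a few lines below (10-kubeadm.conf) and picked up with a daemon-reload before kubelet is started. A minimal sketch for inspecting the effective unit on such a node:

	sudo systemctl cat kubelet      # prints kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload    # reload units after the drop-in is (re)written
	sudo systemctl start kubelet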
	I1213 12:02:55.768237  620795 ssh_runner.go:195] Run: crio config
	I1213 12:02:55.851308  620795 cni.go:84] Creating CNI manager for ""
	I1213 12:02:55.851342  620795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:02:55.851379  620795 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 12:02:55.851413  620795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:02:55.851664  620795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
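The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new just below and later diffed against the existing /var/tmp/minikube/kubeadm.yaml on the node. Assuming the kubeadm binary staged under /var/lib/minikube/binaries in this run supports the validate subcommand, a quick standalone check of such a file would look like:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new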
	
	I1213 12:02:55.852108  620795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 12:02:55.864605  620795 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:02:55.864684  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:02:55.872619  620795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 12:02:55.885648  620795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 12:02:55.898455  620795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1213 12:02:55.911158  620795 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:02:55.914686  620795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:02:55.924267  620795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:02:56.039465  620795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:02:56.056121  620795 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 12:02:56.056196  620795 certs.go:195] generating shared ca certs ...
	I1213 12:02:56.056229  620795 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.056418  620795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:02:56.056488  620795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:02:56.056512  620795 certs.go:257] generating profile certs ...
	I1213 12:02:56.056675  620795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 12:02:56.056781  620795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 12:02:56.056855  620795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 12:02:56.057048  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:02:56.057114  620795 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:02:56.057138  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:02:56.057199  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:02:56.057251  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:02:56.057311  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:02:56.057397  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:02:56.058029  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:02:56.076025  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:02:56.093944  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:02:56.111895  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:02:56.130506  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 12:02:56.150046  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 12:02:56.168049  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:02:56.185529  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 12:02:56.203246  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:02:56.222133  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:02:56.239211  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:02:56.256762  620795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:02:56.269543  620795 ssh_runner.go:195] Run: openssl version
	I1213 12:02:56.276330  620795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.283718  620795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:02:56.291258  620795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.295162  620795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.295230  620795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.336913  620795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:02:56.344459  620795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.351711  620795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:02:56.359139  620795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.362888  620795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.362953  620795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.404066  620795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:02:56.411918  620795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.419334  620795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:02:56.427027  620795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.430803  620795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.430872  620795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.472238  620795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
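The ln/openssl sequence above follows the standard OpenSSL subject-hash naming for /etc/ssl/certs: each CA file placed under /usr/share/ca-certificates is symlinked as <hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal sketch of the same step for a single certificate, using the file names from this log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"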
	I1213 12:02:56.479691  620795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:02:56.483350  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 12:02:56.525393  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 12:02:56.566931  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 12:02:56.609192  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 12:02:56.652105  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 12:02:56.693040  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 12:02:56.733855  620795 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:02:56.733948  620795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:02:56.734009  620795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:02:56.761731  620795 cri.go:89] found id: ""
	I1213 12:02:56.761801  620795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:02:56.769616  620795 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 12:02:56.769635  620795 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 12:02:56.769685  620795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 12:02:56.777201  620795 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 12:02:56.777595  620795 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-800979" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:02:56.777697  620795 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-800979" cluster setting kubeconfig missing "newest-cni-800979" context setting]
	I1213 12:02:56.777977  620795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.779229  620795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 12:02:56.788235  620795 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 12:02:56.788267  620795 kubeadm.go:602] duration metric: took 18.626939ms to restartPrimaryControlPlane
	I1213 12:02:56.788277  620795 kubeadm.go:403] duration metric: took 54.43387ms to StartCluster
	I1213 12:02:56.788293  620795 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.788354  620795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:02:56.788977  620795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.789180  620795 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:02:56.789467  620795 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:02:56.789509  620795 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:02:56.789573  620795 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-800979"
	I1213 12:02:56.789587  620795 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-800979"
	I1213 12:02:56.789612  620795 host.go:66] Checking if "newest-cni-800979" exists ...
	I1213 12:02:56.789613  620795 addons.go:70] Setting dashboard=true in profile "newest-cni-800979"
	I1213 12:02:56.789672  620795 addons.go:239] Setting addon dashboard=true in "newest-cni-800979"
	W1213 12:02:56.789696  620795 addons.go:248] addon dashboard should already be in state true
	I1213 12:02:56.789736  620795 host.go:66] Checking if "newest-cni-800979" exists ...
	I1213 12:02:56.790085  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.790244  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.792126  620795 addons.go:70] Setting default-storageclass=true in profile "newest-cni-800979"
	I1213 12:02:56.792166  620795 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-800979"
	I1213 12:02:56.792556  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.795330  620795 out.go:179] * Verifying Kubernetes components...
	I1213 12:02:56.801331  620795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:02:56.852170  620795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:02:56.854454  620795 addons.go:239] Setting addon default-storageclass=true in "newest-cni-800979"
	I1213 12:02:56.854495  620795 host.go:66] Checking if "newest-cni-800979" exists ...
	I1213 12:02:56.854919  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.855145  620795 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:02:56.855170  620795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:02:56.855216  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:56.855504  620795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 12:02:56.858456  620795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 12:02:56.862056  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 12:02:56.862082  620795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 12:02:56.862152  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:56.899687  620795 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:02:56.899709  620795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:02:56.899772  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:56.927481  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:56.946897  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:56.963888  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:57.047178  620795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:02:57.085323  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:02:57.106670  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:02:57.109541  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 12:02:57.109565  620795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 12:02:57.129099  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 12:02:57.129124  620795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 12:02:57.143152  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 12:02:57.143228  620795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 12:02:57.157748  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 12:02:57.157771  620795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 12:02:57.171682  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 12:02:57.171707  620795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 12:02:57.204203  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 12:02:57.204229  620795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 12:02:57.216958  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 12:02:57.216983  620795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 12:02:57.231106  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 12:02:57.231131  620795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 12:02:57.244346  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:57.244370  620795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 12:02:57.257080  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:57.772618  620795 api_server.go:52] waiting for apiserver process to appear ...
	I1213 12:02:57.772982  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:02:57.772808  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:57.773078  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.773118  620795 retry.go:31] will retry after 217.005737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.773170  620795 retry.go:31] will retry after 239.962871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:57.772882  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.773194  620795 retry.go:31] will retry after 147.663386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.921773  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:57.978386  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.978421  620795 retry.go:31] will retry after 228.081406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.990577  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:58.014070  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:02:58.127933  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.128026  620795 retry.go:31] will retry after 373.102827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:58.127984  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.128061  620795 retry.go:31] will retry after 369.212229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.207107  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:58.267352  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.267385  620795 retry.go:31] will retry after 334.48336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.273686  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:58.497842  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:02:58.501298  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:58.602795  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:58.629431  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.629513  620795 retry.go:31] will retry after 680.299436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:58.629708  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.629741  620795 retry.go:31] will retry after 376.262259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:58.684645  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.684691  620795 retry.go:31] will retry after 1.200875286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.773900  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.007198  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:02:59.067125  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.067165  620795 retry.go:31] will retry after 592.59933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	
	
	==> CRI-O <==
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116588744Z" level=info msg="Image registry.k8s.io/kube-scheduler:v1.35.0-beta.0 not found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.116681922Z" level=info msg="Neither image nor artfiact registry.k8s.io/kube-scheduler:v1.35.0-beta.0 found" id=003f9cb8-ef73-477c-9f7e-cd7904ad42ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779768303Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779939299Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:31 no-preload-307409 crio[836]: time="2025-12-13T11:52:31.779997318Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=4462a0b2-6e23-4130-823a-3449eee15424 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117107034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.11758611Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:32 no-preload-307409 crio[836]: time="2025-12-13T11:52:32.117646903Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=089154b9-cbe2-4530-82d0-0b41da643c1c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342232553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342586722Z" level=info msg="Image gcr.io/k8s-minikube/storage-provisioner:v5 not found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:34 no-preload-307409 crio[836]: time="2025-12-13T11:52:34.342639301Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/storage-provisioner:v5 found" id=7365090d-a9c7-46f6-8c3c-dc876c1ffcf6 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.33182054Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=43635d89-3bd4-44c2-825f-c8431c65dc6f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.335082522Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b9aa7c65-27ab-4115-8617-40478e0c4431 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.336915661Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ba139078-fdf0-4392-91a6-145cf5852d50 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.338604774Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=827565fb-635d-461a-bd67-b5ae5370ff66 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.339721074Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0529a105-853e-48a9-a6a2-0f2cc8e7d4de name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.344733068Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d292bb3c-e44b-4d74-9c47-e804425ec1f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:52:48 no-preload-307409 crio[836]: time="2025-12-13T11:52:48.347983735Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5383fa2b-ffc4-4de0-8c1f-994389259392 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.616112342Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=3027b62a-b474-4ce9-a79a-b73a049c156c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.61769885Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=085f7430-a688-461e-929e-a810830d4d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.619174448Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=dcae434f-7a2a-45da-aecd-fe682d69c75c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.620679297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=9c972166-33b8-4e43-8eb0-69fa78d92d4d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.621515325Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=f406858a-9da8-4255-acef-b33ba48d16bf name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.622872825Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=378a3dd0-9334-4c41-946c-b18ffb0ce982 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:56:52 no-preload-307409 crio[836]: time="2025-12-13T11:56:52.62375966Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=685e82ba-5807-4b97-bc6c-0036cf58fa30 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:03:00.990324    7156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:00.991021    7156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:00.992669    7156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:00.993065    7156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:00.994569    7156 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:03:01 up  3:45,  0 user,  load average: 0.52, 0.81, 1.43
	Linux no-preload-307409 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:02:58 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:02:58 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 13 12:02:58 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:58 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:58 no-preload-307409 kubelet[7041]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:58 no-preload-307409 kubelet[7041]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:58 no-preload-307409 kubelet[7041]: E1213 12:02:58.825405    7041 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:02:58 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:02:58 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:02:59 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 13 12:02:59 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:59 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:59 no-preload-307409 kubelet[7051]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:59 no-preload-307409 kubelet[7051]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:59 no-preload-307409 kubelet[7051]: E1213 12:02:59.585013    7051 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:02:59 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:02:59 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:03:00 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 13 12:03:00 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:03:00 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:03:00 no-preload-307409 kubelet[7071]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:03:00 no-preload-307409 kubelet[7071]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:03:00 no-preload-307409 kubelet[7071]: E1213 12:03:00.470472    7071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:03:00 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:03:00 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 6 (342.687986ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:03:01.482499  622590 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (122.13s)
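
The kubelet log above points at the root cause rather than the addon itself: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), so the apiserver on localhost:8443 never comes up and every kubectl call is refused. A minimal way to confirm which cgroup hierarchy a runner exposes (a diagnostic sketch, not part of the test run) is:

	# Prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on a cgroup v1 host.
	stat -fc %T /sys/fs/cgroup/
	# On Ubuntu 20.04 runners such as this one, cgroup v2 can typically be enabled by
	# booting with systemd.unified_cgroup_hierarchy=1 (suggested remediation, not verified here).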

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (97.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1213 12:01:12.386447  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:02:00.471706  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:02:09.719049  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:02:27.931210  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.272441521s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
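The metrics-server failure here is secondary: kubectl on the node cannot validate the manifests because nothing is listening on localhost:8443, which is consistent with the kubelet crash loop seen for the no-preload profile above (both profiles run on the same cgroup v1 runner). A quick reachability check before retrying the addon (a sketch reusing the profile name and binary paths from this test, not output from the run) would be:

	# How minikube sees the host and apiserver for this profile:
	out/minikube-linux-arm64 status -p newest-cni-800979
	# Probe the apiserver readiness endpoint from inside the node:
	out/minikube-linux-arm64 ssh -p newest-cni-800979 -- \
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz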
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-800979
helpers_test.go:244: (dbg) docker inspect newest-cni-800979:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	        "Created": "2025-12-13T11:52:51.619651061Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 608187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T11:52:51.70884903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hosts",
	        "LogPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef-json.log",
	        "Name": "/newest-cni-800979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-800979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-800979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	                "LowerDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-800979",
	                "Source": "/var/lib/docker/volumes/newest-cni-800979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-800979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-800979",
	                "name.minikube.sigs.k8s.io": "newest-cni-800979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05cea40e8c1eaa213015e5d86b7630be51a595e18678344c509541c6234a6461",
	            "SandboxKey": "/var/run/docker/netns/05cea40e8c1e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-800979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:d0:81:44:f6:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de59fc08c8081b0c37df8bacf82db2ccccb307596588e9c22d7d094938935e3c",
	                    "EndpointID": "748f656075b24b4919ccd977616a9f21ba5987f640fc9fc2eca0de1a70fbf555",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-800979",
	                        "4aef671a766b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979: exit status 6 (324.3274ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:02:46.487424  620285 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-800979" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:49 UTC │ 13 Dec 25 11:50 UTC │
	│ delete  │ -p cert-expiration-420007                                                                                                                                                                                                                            │ cert-expiration-420007       │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:50 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-151605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-151605 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:50 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 11:52:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 11:52:44.222945  607523 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:52:44.223057  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223099  607523 out.go:374] Setting ErrFile to fd 2...
	I1213 11:52:44.223106  607523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:52:44.223364  607523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:52:44.223812  607523 out.go:368] Setting JSON to false
	I1213 11:52:44.224724  607523 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12917,"bootTime":1765613848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:52:44.224797  607523 start.go:143] virtualization:  
	I1213 11:52:44.228935  607523 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:52:44.232087  607523 notify.go:221] Checking for updates...
	I1213 11:52:44.232862  607523 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:52:44.236046  607523 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:52:44.241086  607523 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:52:44.244482  607523 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:52:44.247343  607523 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:52:44.250267  607523 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:52:44.253709  607523 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:44.253853  607523 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:52:44.284666  607523 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:52:44.284774  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.401910  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.38729859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.402031  607523 docker.go:319] overlay module found
	I1213 11:52:44.405585  607523 out.go:179] * Using the docker driver based on user configuration
	I1213 11:52:44.408428  607523 start.go:309] selected driver: docker
	I1213 11:52:44.408454  607523 start.go:927] validating driver "docker" against <nil>
	I1213 11:52:44.408468  607523 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:52:44.409713  607523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:52:44.548406  607523 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:66 SystemTime:2025-12-13 11:52:44.53777287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:52:44.548555  607523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1213 11:52:44.548581  607523 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1213 11:52:44.549476  607523 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 11:52:44.552258  607523 out.go:179] * Using Docker driver with root privileges
	I1213 11:52:44.555279  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.555356  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.555365  607523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 11:52:44.555448  607523 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:44.558889  607523 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 11:52:44.561893  607523 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 11:52:44.564946  607523 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 11:52:44.567939  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:44.568029  607523 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 11:52:44.568050  607523 cache.go:65] Caching tarball of preloaded images
	I1213 11:52:44.568145  607523 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 11:52:44.568156  607523 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 11:52:44.568295  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:44.568315  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json: {Name:mkca051d0f4222f12ada2e542e9765aa1caaa1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:44.568460  607523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 11:52:44.614235  607523 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 11:52:44.614511  607523 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 11:52:44.614568  607523 cache.go:243] Successfully downloaded all kic artifacts
	I1213 11:52:44.614617  607523 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 11:52:44.614732  607523 start.go:364] duration metric: took 92.595µs to acquireMachinesLock for "newest-cni-800979"
	I1213 11:52:44.614763  607523 start.go:93] Provisioning new machine with config: &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 11:52:44.614850  607523 start.go:125] createHost starting for "" (driver="docker")
	I1213 11:52:43.447904  603921 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.748996566s)
	I1213 11:52:43.447934  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1213 11:52:43.447952  603921 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:43.448001  603921 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1213 11:52:44.178615  603921 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1213 11:52:44.178655  603921 cache_images.go:125] Successfully loaded all cached images
	I1213 11:52:44.178662  603921 cache_images.go:94] duration metric: took 13.878753268s to LoadCachedImages
	I1213 11:52:44.178674  603921 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:44.178763  603921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 11:52:44.178851  603921 ssh_runner.go:195] Run: crio config
	I1213 11:52:44.242383  603921 cni.go:84] Creating CNI manager for ""
	I1213 11:52:44.242401  603921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:44.242418  603921 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 11:52:44.242441  603921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:44.242555  603921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
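The config printed above is assembled by substituting the cluster's parameters (advertise address, node name, pod/service CIDRs, Kubernetes version) into a kubeadm manifest. A minimal illustrative sketch of that kind of templating, using only the Go standard library and a simplified stand-in template (the field values are copied from the log; the template itself is not minikube's real one):

    package main

    import (
        "os"
        "text/template"
    )

    // params mirrors a few of the values visible in the generated config above.
    type params struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
        K8sVersion       string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the kubeadm config printed in the log.
        p := params{
            AdvertiseAddress: "192.168.85.2",
            BindPort:         8443,
            NodeName:         "no-preload-307409",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            K8sVersion:       "v1.35.0-beta.0",
        }
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }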
	
	I1213 11:52:44.242622  603921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.254521  603921 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1213 11:52:44.254582  603921 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:44.274613  603921 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1213 11:52:44.274705  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1213 11:52:44.275568  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1213 11:52:44.278466  603921 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1213 11:52:44.279131  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1213 11:52:44.279162  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1213 11:52:45.122331  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:52:45.166456  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1213 11:52:45.191725  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1213 11:52:45.191781  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1213 11:52:45.304315  603921 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1213 11:52:45.334054  603921 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1213 11:52:45.334112  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
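The downloads above use "?checksum=file:...sha256" URLs, i.e. the fetched binary is verified against a published SHA-256 digest before being copied to the node. A minimal sketch of that pattern in Go (standard library only; the digest value below is a placeholder, not the real kubectl checksum):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithSHA256 fetches url to dest and verifies the payload against
    // an expected hex-encoded SHA-256 digest, mirroring the checksum=file:...
    // verification the log shows for kubectl, kubelet and kubeadm.
    func downloadWithSHA256(url, dest, wantHex string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unexpected status %s", resp.Status)
        }

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := sha256.New()
        // Write to the file and the hash in one pass.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        // URL taken from the log; the expected digest is a placeholder.
        url := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
        if err := downloadWithSHA256(url, "kubectl", "<expected sha256 hex>"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }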
	I1213 11:52:46.015388  603921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:46.024888  603921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:46.040762  603921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:46.056856  603921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 11:52:46.080441  603921 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:46.084885  603921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:46.097815  603921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:46.230479  603921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:46.251958  603921 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 11:52:46.251982  603921 certs.go:195] generating shared ca certs ...
	I1213 11:52:46.251998  603921 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.252212  603921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:46.252287  603921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:46.252302  603921 certs.go:257] generating profile certs ...
	I1213 11:52:46.252373  603921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 11:52:46.252392  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt with IP's: []
	I1213 11:52:46.687159  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt ...
	I1213 11:52:46.687196  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.crt: {Name:mkd3b6de93eb4d0d7c38606e110ec8041a7a8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687382  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key ...
	I1213 11:52:46.687530  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key: {Name:mk69f4e38edb3a6758b30b8919bec09ed6524780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:46.687680  603921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 11:52:46.687705  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1213 11:52:47.101196  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b ...
	I1213 11:52:47.101275  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b: {Name:mkf348306e6448fd779f0c40568bfbc2591db27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101515  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b ...
	I1213 11:52:47.101554  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b: {Name:mk67006fcc87c7852dc9dd2baf2e5c091f89fb64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.101697  603921 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt
	I1213 11:52:47.101816  603921 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key
	I1213 11:52:47.101906  603921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 11:52:47.101964  603921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt with IP's: []
	I1213 11:52:47.391626  603921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt ...
	I1213 11:52:47.391702  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt: {Name:mk6bf9ff3c46be8a69edc887a1d740e84c930536 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:47.391910  603921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key ...
	I1213 11:52:47.391946  603921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key: {Name:mk5282a1a4966c51394d6aeb663ae12cef8b3a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
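The profile certs generated above (client, apiserver, proxy-client) are issued with the IP SANs listed in the log, e.g. [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] for the apiserver cert. A simplified sketch of issuing a cert with those IP SANs using crypto/x509; it is self-signed for brevity, whereas minikube signs these with its profile CA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // IP SANs copied from the log's apiserver cert generation.
        sans := []net.IP{
            net.ParseIP("10.96.0.1"),
            net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.85.2"),
        }

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  sans,
        }
        // Self-signed for brevity; the real flow uses the profile CA as parent.
        der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }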
	I1213 11:52:47.392186  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:47.392256  603921 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:47.392281  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:47.392345  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:47.392401  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:47.392449  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:47.392534  603921 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:47.393177  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:47.413169  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:47.433634  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:47.456446  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:47.475453  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:47.495921  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 11:52:47.516359  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:47.533557  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 11:52:47.553686  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:47.576528  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:47.595023  603921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:47.617574  603921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 11:52:47.632766  603921 ssh_runner.go:195] Run: openssl version
	I1213 11:52:47.642255  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.651062  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:47.660280  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665117  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.665212  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:47.711366  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:52:47.719094  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:52:47.727218  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.735147  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:52:47.743430  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748386  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.748477  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:52:47.811036  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:52:47.824172  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 11:52:47.833720  603921 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.842937  603921 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:47.852257  603921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857336  603921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.857459  603921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:47.913987  603921 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.923742  603921 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:47.932105  603921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:52:47.937831  603921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:52:47.937953  603921 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:52:47.938056  603921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:52:47.938131  603921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:52:47.977617  603921 cri.go:89] found id: ""
	I1213 11:52:47.977734  603921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:52:47.986677  603921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:52:47.995428  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:52:47.995568  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:52:48.012929  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:52:48.013001  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:52:48.013078  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:52:48.023587  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:52:48.023720  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:52:48.033048  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:52:48.042898  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:52:48.043030  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:52:48.052336  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.062442  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:52:48.062560  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:52:48.071404  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:52:48.081302  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:52:48.081415  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:52:48.090412  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:52:48.139895  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:52:48.140310  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:52:48.244346  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:52:48.244445  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:52:48.244514  603921 kubeadm.go:319] OS: Linux
	I1213 11:52:48.244581  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:52:48.244649  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:52:48.244717  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:52:48.244785  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:52:48.244849  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:52:48.244917  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:52:48.244983  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:52:48.245052  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:52:48.245113  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:52:48.326956  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:52:48.327125  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:52:48.327254  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:52:48.353781  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:52:44.618660  607523 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 11:52:44.618986  607523 start.go:159] libmachine.API.Create for "newest-cni-800979" (driver="docker")
	I1213 11:52:44.619024  607523 client.go:173] LocalClient.Create starting
	I1213 11:52:44.619095  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 11:52:44.619134  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619169  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619234  607523 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 11:52:44.619259  607523 main.go:143] libmachine: Decoding PEM data...
	I1213 11:52:44.619275  607523 main.go:143] libmachine: Parsing certificate...
	I1213 11:52:44.619828  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 11:52:44.681886  607523 cli_runner.go:211] docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 11:52:44.682019  607523 network_create.go:284] running [docker network inspect newest-cni-800979] to gather additional debugging logs...
	I1213 11:52:44.682044  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979
	W1213 11:52:44.783263  607523 cli_runner.go:211] docker network inspect newest-cni-800979 returned with exit code 1
	I1213 11:52:44.783303  607523 network_create.go:287] error running [docker network inspect newest-cni-800979]: docker network inspect newest-cni-800979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-800979 not found
	I1213 11:52:44.783456  607523 network_create.go:289] output of [docker network inspect newest-cni-800979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-800979 not found
	
	** /stderr **
	I1213 11:52:44.783853  607523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:44.869365  607523 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 11:52:44.869936  607523 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 11:52:44.870324  607523 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 11:52:44.872231  607523 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1213 11:52:44.872625  607523 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-280e424abad6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5e:ad:5b:52:ee:cb} reservation:<nil>}
	I1213 11:52:44.873100  607523 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a730}
	I1213 11:52:44.873121  607523 network_create.go:124] attempt to create docker network newest-cni-800979 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1213 11:52:44.873186  607523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-800979 newest-cni-800979
	I1213 11:52:45.033952  607523 network_create.go:108] docker network newest-cni-800979 192.168.94.0/24 created
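The subnet scan above walks 192.168.49.0/24, .58, .67, .76, .85 and settles on 192.168.94.0/24, the first block not already taken or reserved by an existing bridge. A small sketch of that selection; the step-of-9 walk is inferred from the sequence in the log rather than from minikube source:

    package main

    import "fmt"

    // firstFreeSubnet mimics the scan in the log: walk candidate 192.168.x.0/24
    // blocks in steps of 9 starting at 49 and return the first one that is not
    // already taken or reserved on the host.
    func firstFreeSubnet(taken map[string]bool) string {
        for third := 49; third < 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        // Subnets the log reports as taken or reserved by existing bridges.
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
            "192.168.85.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24
    }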
	I1213 11:52:45.033989  607523 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-800979" container
	I1213 11:52:45.034089  607523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 11:52:45.110922  607523 cli_runner.go:164] Run: docker volume create newest-cni-800979 --label name.minikube.sigs.k8s.io=newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true
	I1213 11:52:45.147181  607523 oci.go:103] Successfully created a docker volume newest-cni-800979
	I1213 11:52:45.148756  607523 cli_runner.go:164] Run: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 11:52:46.576150  607523 cli_runner.go:217] Completed: docker run --rm --name newest-cni-800979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --entrypoint /usr/bin/test -v newest-cni-800979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (1.427287827s)
	I1213 11:52:46.576182  607523 oci.go:107] Successfully prepared a docker volume newest-cni-800979
	I1213 11:52:46.576222  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:46.576231  607523 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 11:52:46.576286  607523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 11:52:48.362615  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:52:48.362749  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:52:48.362861  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:52:48.406340  603921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:52:48.617898  603921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:52:48.894950  603921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:52:49.002897  603921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:52:49.595632  603921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:52:49.596022  603921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.703067  603921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:52:49.703500  603921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1213 11:52:49.852748  603921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:52:49.985441  603921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:52:50.361702  603921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:52:50.362007  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:52:50.448441  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:52:50.524868  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:52:51.254957  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:52:51.473347  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:52:51.686418  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:52:51.686517  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:52:51.690277  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:52:51.694117  603921 out.go:252]   - Booting up control plane ...
	I1213 11:52:51.694231  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:52:51.694310  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:52:51.695018  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:52:51.714016  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:52:51.714689  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:52:51.728439  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:52:51.728548  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:52:51.728589  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:52:51.918802  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:52:51.918928  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:52:51.477960  607523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-800979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.901639858s)
	I1213 11:52:51.478004  607523 kic.go:203] duration metric: took 4.901755297s to extract preloaded images to volume ...
	W1213 11:52:51.478154  607523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 11:52:51.478257  607523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 11:52:51.600099  607523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-800979 --name newest-cni-800979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-800979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-800979 --network newest-cni-800979 --ip 192.168.94.2 --volume newest-cni-800979:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 11:52:52.003446  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Running}}
	I1213 11:52:52.025630  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.044945  607523 cli_runner.go:164] Run: docker exec newest-cni-800979 stat /var/lib/dpkg/alternatives/iptables
	I1213 11:52:52.103780  607523 oci.go:144] the created container "newest-cni-800979" has a running status.
	I1213 11:52:52.103827  607523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa...
	I1213 11:52:52.454986  607523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 11:52:52.499855  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.520167  607523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 11:52:52.520186  607523 kic_runner.go:114] Args: [docker exec --privileged newest-cni-800979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 11:52:52.595209  607523 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 11:52:52.616614  607523 machine.go:94] provisionDockerMachine start ...
	I1213 11:52:52.616710  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:52.645695  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:52.646054  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:52.646065  607523 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 11:52:52.646853  607523 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49104->127.0.0.1:33463: read: connection reset by peer
	I1213 11:52:55.795509  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
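The first SSH dial at 11:52:52 fails with "connection reset by peer" because the container's sshd is not yet accepting connections; the command succeeds a few seconds later once provisioning catches up. A minimal sketch of the underlying retry-until-reachable idea, assuming the host-mapped port 33463 from this run and using a plain TCP dial rather than a full SSH handshake:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps trying to reach the forwarded SSH port until the
    // container's sshd is ready, tolerating the transient connection resets
    // seen while the machine is still provisioning.
    func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            time.Sleep(wait)
        }
        return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        // 127.0.0.1:33463 is the host port Docker mapped to the container's sshd in this run.
        conn, err := dialWithRetry("127.0.0.1:33463", 10, 2*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }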
	
	I1213 11:52:55.795546  607523 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 11:52:55.795609  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:55.823768  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:55.824086  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:55.824105  607523 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 11:52:55.984531  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 11:52:55.984627  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.004427  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.004789  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.004806  607523 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 11:52:56.155779  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 11:52:56.155809  607523 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 11:52:56.155840  607523 ubuntu.go:190] setting up certificates
	I1213 11:52:56.155849  607523 provision.go:84] configureAuth start
	I1213 11:52:56.155916  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:56.173051  607523 provision.go:143] copyHostCerts
	I1213 11:52:56.173126  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 11:52:56.173140  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 11:52:56.173218  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 11:52:56.173314  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 11:52:56.173326  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 11:52:56.173354  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 11:52:56.173407  607523 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 11:52:56.173416  607523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 11:52:56.173440  607523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 11:52:56.173493  607523 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 11:52:56.495741  607523 provision.go:177] copyRemoteCerts
	I1213 11:52:56.495819  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 11:52:56.495860  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.513776  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:56.623272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 11:52:56.640893  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 11:52:56.658251  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 11:52:56.675898  607523 provision.go:87] duration metric: took 520.035144ms to configureAuth
	I1213 11:52:56.675924  607523 ubuntu.go:206] setting minikube options for container-runtime
	I1213 11:52:56.676119  607523 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:52:56.676229  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:56.693573  607523 main.go:143] libmachine: Using SSH client type: native
	I1213 11:52:56.693885  607523 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1213 11:52:56.693913  607523 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 11:52:57.000433  607523 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 11:52:57.000459  607523 machine.go:97] duration metric: took 4.383824523s to provisionDockerMachine
	I1213 11:52:57.000471  607523 client.go:176] duration metric: took 12.381437402s to LocalClient.Create
	I1213 11:52:57.000485  607523 start.go:167] duration metric: took 12.381502329s to libmachine.API.Create "newest-cni-800979"
	I1213 11:52:57.000493  607523 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 11:52:57.000506  607523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 11:52:57.000573  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 11:52:57.000635  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.019654  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.123498  607523 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 11:52:57.126887  607523 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 11:52:57.126915  607523 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 11:52:57.126942  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 11:52:57.127003  607523 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 11:52:57.127090  607523 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 11:52:57.127193  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 11:52:57.134628  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:57.153601  607523 start.go:296] duration metric: took 153.093637ms for postStartSetup
	I1213 11:52:57.154022  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.174170  607523 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 11:52:57.174465  607523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:52:57.174516  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.191003  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.300652  607523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 11:52:57.305941  607523 start.go:128] duration metric: took 12.691075107s to createHost
	I1213 11:52:57.305969  607523 start.go:83] releasing machines lock for "newest-cni-800979", held for 12.691222882s
	I1213 11:52:57.306067  607523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 11:52:57.324383  607523 ssh_runner.go:195] Run: cat /version.json
	I1213 11:52:57.324411  607523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 11:52:57.324436  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.324473  607523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 11:52:57.349379  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.349454  607523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 11:52:57.540188  607523 ssh_runner.go:195] Run: systemctl --version
	I1213 11:52:57.546743  607523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 11:52:57.581981  607523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 11:52:57.586210  607523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 11:52:57.586277  607523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 11:52:57.614440  607523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 11:52:57.614460  607523 start.go:496] detecting cgroup driver to use...
	I1213 11:52:57.614492  607523 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 11:52:57.614539  607523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 11:52:57.632118  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 11:52:57.645277  607523 docker.go:218] disabling cri-docker service (if available) ...
	I1213 11:52:57.645361  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 11:52:57.663447  607523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 11:52:57.682384  607523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 11:52:57.805277  607523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 11:52:57.932514  607523 docker.go:234] disabling docker service ...
	I1213 11:52:57.932589  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 11:52:57.955202  607523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 11:52:57.968354  607523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 11:52:58.113128  607523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 11:52:58.247772  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 11:52:58.262298  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 11:52:58.277400  607523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 11:52:58.277526  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.287200  607523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 11:52:58.287335  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.296697  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.305672  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.315083  607523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 11:52:58.324248  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.333206  607523 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.346564  607523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 11:52:58.355703  607523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 11:52:58.363253  607523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 11:52:58.370805  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:58.492125  607523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 11:52:58.663207  607523 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 11:52:58.663336  607523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 11:52:58.667219  607523 start.go:564] Will wait 60s for crictl version
	I1213 11:52:58.667334  607523 ssh_runner.go:195] Run: which crictl
	I1213 11:52:58.671116  607523 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 11:52:58.697501  607523 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 11:52:58.697619  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.733197  607523 ssh_runner.go:195] Run: crio --version
	I1213 11:52:58.768647  607523 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 11:52:58.771459  607523 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 11:52:58.789274  607523 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 11:52:58.795116  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 11:52:58.812164  607523 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 11:52:58.814926  607523 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 11:52:58.815100  607523 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 11:52:58.815179  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.855416  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.855438  607523 crio.go:433] Images already preloaded, skipping extraction
	I1213 11:52:58.855493  607523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 11:52:58.882823  607523 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 11:52:58.882846  607523 cache_images.go:86] Images are preloaded, skipping loading
	I1213 11:52:58.882855  607523 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 11:52:58.882940  607523 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
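A short sketch (not from the log) of how the effective kubelet unit could be inspected on the node once the drop-in above is written; the drop-in path is the one scp'd later in this log (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf):
    sudo systemctl cat kubelet                 # kubelet.service plus its drop-ins, including ExecStart
    sudo systemctl daemon-reload               # pick up edits to unit files
    systemctl show kubelet -p ExecStart        # the final, merged ExecStart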
	I1213 11:52:58.883028  607523 ssh_runner.go:195] Run: crio config
	I1213 11:52:58.937332  607523 cni.go:84] Creating CNI manager for ""
	I1213 11:52:58.937355  607523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 11:52:58.937377  607523 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 11:52:58.937402  607523 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 11:52:58.937530  607523 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
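The generated config above can be exercised without creating a cluster; a hedged sketch using the file path that kubeadm is later invoked with (/var/tmp/minikube/kubeadm.yaml):
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # recent kubeadm releases can also lint the file directly:
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml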
	
	I1213 11:52:58.937607  607523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 11:52:58.945256  607523 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 11:52:58.945332  607523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 11:52:58.952916  607523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 11:52:58.965421  607523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 11:52:58.978594  607523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1213 11:52:58.991343  607523 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 11:52:58.994981  607523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
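A small sketch (not from the log) for double-checking the /etc/hosts rewrite above; the name and address are the ones in the log:
    grep 'control-plane.minikube.internal' /etc/hosts
    getent hosts control-plane.minikube.internal    # expect 192.168.94.2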
	I1213 11:52:59.006043  607523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 11:52:59.120731  607523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 11:52:59.136632  607523 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 11:52:59.136650  607523 certs.go:195] generating shared ca certs ...
	I1213 11:52:59.136667  607523 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.136813  607523 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 11:52:59.136864  607523 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 11:52:59.136875  607523 certs.go:257] generating profile certs ...
	I1213 11:52:59.136930  607523 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 11:52:59.136948  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt with IP's: []
	I1213 11:52:59.229537  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt ...
	I1213 11:52:59.229569  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.crt: {Name:mk69c62c6a65f19f1e9ae6f6006b84310e5ca69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229797  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key ...
	I1213 11:52:59.229813  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key: {Name:mk0d678e2df0ba46ea7a7d9db0beddac15d16cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.229927  607523 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 11:52:59.229947  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1213 11:52:59.395722  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 ...
	I1213 11:52:59.395753  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606: {Name:mk2f0d7037f2191b2fb310c8e6e39abce6919307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.395933  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 ...
	I1213 11:52:59.395948  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606: {Name:mkeda4d05cf7f14a6919666348bb90fff24821e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.396035  607523 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt
	I1213 11:52:59.396122  607523 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key
	I1213 11:52:59.396187  607523 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 11:52:59.396205  607523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt with IP's: []
	I1213 11:52:59.677399  607523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt ...
	I1213 11:52:59.677431  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt: {Name:mk4f6f44ef9664fbc510805af3a0a5d8216b34d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677617  607523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key ...
	I1213 11:52:59.677634  607523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key: {Name:mk08e1a717d212a6e36443fd4449253d4dfd4e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 11:52:59.677867  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 11:52:59.677925  607523 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 11:52:59.677936  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 11:52:59.677963  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 11:52:59.677989  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 11:52:59.678018  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 11:52:59.678067  607523 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 11:52:59.678646  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 11:52:59.697504  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 11:52:59.715937  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 11:52:59.733272  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 11:52:59.751842  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 11:52:59.769868  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 11:52:59.787032  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 11:52:59.804197  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 11:52:59.822307  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 11:52:59.840119  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 11:52:59.857580  607523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 11:52:59.875033  607523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
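Once the profile certificates above are copied into /var/lib/minikube/certs, openssl can confirm their subjects and SANs; an illustrative sketch using the apiserver cert path from the log:
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -enddate
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'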
	I1213 11:52:59.887226  607523 ssh_runner.go:195] Run: openssl version
	I1213 11:52:59.893568  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.900683  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 11:52:59.907927  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911699  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.911785  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 11:52:59.952546  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.959999  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 11:52:59.967191  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.974551  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 11:52:59.981936  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985667  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:52:59.985735  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 11:53:00.029636  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 11:53:00.039949  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 11:53:00.051259  607523 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.062203  607523 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 11:53:00.071922  607523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077479  607523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.077644  607523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 11:53:00.129667  607523 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 11:53:00.145873  607523 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
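The link names above (e.g. b5213941.0, 51391683.0) are the openssl subject hashes of the CA files; a sketch (not from the log) showing the mapping for the minikubeCA file:
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"               # should point at the minikubeCA.pem link created above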
	I1213 11:53:00.165719  607523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 11:53:00.182484  607523 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 11:53:00.182650  607523 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:53:00.191964  607523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 11:53:00.192781  607523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 11:53:00.308764  607523 cri.go:89] found id: ""
	I1213 11:53:00.308851  607523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 11:53:00.339801  607523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 11:53:00.369102  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:53:00.369171  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:53:00.383298  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:53:00.383367  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:53:00.383424  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:53:00.395580  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:53:00.395656  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:53:00.405571  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:53:00.415778  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:53:00.415854  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:53:00.424800  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.434079  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:53:00.434162  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:53:00.443040  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:53:00.452144  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:53:00.452246  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
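The grep/rm pairs above are minikube's stale-config cleanup; a condensed sketch (not from the log) of the same check for one file, using the admin.conf path shown:
    sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf    # remove it when it is missing or points elsewhere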
	I1213 11:53:00.461542  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:53:00.503183  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:53:00.503307  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:53:00.580961  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:53:00.581064  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:53:00.581117  607523 kubeadm.go:319] OS: Linux
	I1213 11:53:00.581167  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:53:00.581226  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:53:00.581277  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:53:00.581327  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:53:00.581379  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:53:00.581429  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:53:00.581478  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:53:00.581529  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:53:00.581581  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:53:00.654422  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:53:00.654539  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:53:00.654635  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:53:00.667854  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:53:00.673949  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:53:00.674119  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:53:00.674229  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:53:00.749466  607523 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 11:53:00.853085  607523 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 11:53:01.087749  607523 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 11:53:01.312048  607523 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 11:53:01.513347  607523 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 11:53:01.513768  607523 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:01.838749  607523 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 11:53:01.839657  607523 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1213 11:53:02.478657  607523 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 11:53:02.876105  607523 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 11:53:03.010338  607523 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 11:53:03.010418  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:53:03.200889  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:53:03.653890  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:53:04.344965  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:53:04.580887  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:53:04.785257  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:53:04.787179  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:53:04.796409  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:53:04.799699  607523 out.go:252]   - Booting up control plane ...
	I1213 11:53:04.799829  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:53:04.799918  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:53:04.803001  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:53:04.836757  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:53:04.837037  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:53:04.849469  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:53:04.850109  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:53:04.853862  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:53:05.015188  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:53:05.015326  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:56:51.920072  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001224221s
	I1213 11:56:51.920104  603921 kubeadm.go:319] 
	I1213 11:56:51.920212  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:56:51.920270  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:56:51.920608  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:56:51.920619  603921 kubeadm.go:319] 
	I1213 11:56:51.920812  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:56:51.920869  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:56:51.921157  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:56:51.921165  603921 kubeadm.go:319] 
	I1213 11:56:51.925513  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:56:51.926006  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:56:51.926180  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:56:51.926479  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:56:51.926517  603921 kubeadm.go:319] 
	W1213 11:56:51.926771  603921 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-307409] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001224221s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
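The kubeadm output above already names the next diagnostic steps; an illustrative sketch of running them on the node, together with the healthz endpoint kubeadm was polling:
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    curl -sS http://127.0.0.1:10248/healthz; echo      # prints "ok" when the kubelet is healthy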
	
	I1213 11:56:51.926983  603921 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:56:51.927241  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 11:56:52.337349  603921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:56:52.355756  603921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:56:52.355865  603921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:56:52.364798  603921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:56:52.364819  603921 kubeadm.go:158] found existing configuration files:
	
	I1213 11:56:52.364872  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:56:52.373016  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:56:52.373085  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:56:52.380868  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:56:52.388839  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:56:52.388908  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:56:52.396493  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.404428  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:56:52.404492  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:56:52.412543  603921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:56:52.420710  603921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:56:52.420784  603921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 11:56:52.428931  603921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:56:52.469486  603921 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:56:52.469812  603921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:56:52.544538  603921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:56:52.544634  603921 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:56:52.544691  603921 kubeadm.go:319] OS: Linux
	I1213 11:56:52.544758  603921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:56:52.544826  603921 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:56:52.544893  603921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:56:52.544959  603921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:56:52.545027  603921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:56:52.545094  603921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:56:52.545159  603921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:56:52.545225  603921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:56:52.545290  603921 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:56:52.613010  603921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:56:52.613120  603921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:56:52.613213  603921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:56:52.631911  603921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:56:52.635687  603921 out.go:252]   - Generating certificates and keys ...
	I1213 11:56:52.635862  603921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:56:52.635952  603921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:56:52.636046  603921 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:56:52.636157  603921 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:56:52.636251  603921 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:56:52.636343  603921 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:56:52.636411  603921 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:56:52.636489  603921 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:56:52.636569  603921 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:56:52.636650  603921 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:56:52.636696  603921 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:56:52.636757  603921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:56:52.776698  603921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:56:52.958761  603921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:56:53.117866  603921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:56:53.292950  603921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:56:53.736752  603921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:56:53.737374  603921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:56:53.739900  603921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:56:53.743260  603921 out.go:252]   - Booting up control plane ...
	I1213 11:56:53.743409  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:56:53.743561  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:56:53.743673  603921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:56:53.757211  603921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:56:53.757338  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:56:53.765875  603921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:56:53.766984  603921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:56:53.767070  603921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:56:53.918187  603921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:56:53.918313  603921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 11:57:05.013826  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000267538s
	I1213 11:57:05.013870  607523 kubeadm.go:319] 
	I1213 11:57:05.013935  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 11:57:05.013971  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 11:57:05.014088  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 11:57:05.014096  607523 kubeadm.go:319] 
	I1213 11:57:05.014210  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 11:57:05.014246  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 11:57:05.014279  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 11:57:05.014287  607523 kubeadm.go:319] 
	I1213 11:57:05.020057  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 11:57:05.020490  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 11:57:05.020604  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 11:57:05.020844  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1213 11:57:05.020856  607523 kubeadm.go:319] 
	I1213 11:57:05.020925  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1213 11:57:05.021047  607523 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-800979] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000267538s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
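The first kubeadm init attempt above fails because the kubelet never reports healthy within the 4m0s window. A minimal sketch of the diagnostics that kubeadm itself recommends, assuming they are run on the node (for the docker driver, inside the container via 'minikube ssh'):

    # Check whether the kubelet unit is running and why it may have exited
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet -n 100
    # Probe the same healthz endpoint that kubeadm polls during wait-control-plane
    curl -sSL http://127.0.0.1:10248/healthz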
	
	I1213 11:57:05.021134  607523 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1213 11:57:05.432952  607523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:57:05.445933  607523 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 11:57:05.446023  607523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 11:57:05.454556  607523 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 11:57:05.454578  607523 kubeadm.go:158] found existing configuration files:
	
	I1213 11:57:05.454629  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 11:57:05.462597  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 11:57:05.462670  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 11:57:05.470456  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 11:57:05.478316  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 11:57:05.478382  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 11:57:05.485947  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.494252  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 11:57:05.494320  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 11:57:05.502133  607523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 11:57:05.510237  607523 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 11:57:05.510311  607523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
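Before retrying, minikube drops any kubeconfig that does not reference the expected control-plane endpoint. A hedged sketch of that cleanup as a single loop, equivalent to the grep/rm pairs above:

    # Keep each kubeconfig only if it already points at the in-cluster endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done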
	I1213 11:57:05.518001  607523 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 11:57:05.584840  607523 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1213 11:57:05.585142  607523 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 11:57:05.657959  607523 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 11:57:05.658125  607523 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 11:57:05.658198  607523 kubeadm.go:319] OS: Linux
	I1213 11:57:05.658288  607523 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 11:57:05.658378  607523 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 11:57:05.658471  607523 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 11:57:05.658558  607523 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 11:57:05.658635  607523 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 11:57:05.658730  607523 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 11:57:05.658813  607523 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 11:57:05.658915  607523 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 11:57:05.659000  607523 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 11:57:05.731597  607523 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 11:57:05.731775  607523 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 11:57:05.731903  607523 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 11:57:05.740855  607523 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 11:57:05.744423  607523 out.go:252]   - Generating certificates and keys ...
	I1213 11:57:05.744578  607523 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 11:57:05.744679  607523 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 11:57:05.744796  607523 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1213 11:57:05.744887  607523 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1213 11:57:05.744992  607523 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1213 11:57:05.745076  607523 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1213 11:57:05.745170  607523 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1213 11:57:05.745499  607523 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1213 11:57:05.745582  607523 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1213 11:57:05.745655  607523 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1213 11:57:05.745694  607523 kubeadm.go:319] [certs] Using the existing "sa" key
	I1213 11:57:05.745749  607523 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 11:57:05.913677  607523 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 11:57:06.384962  607523 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 11:57:07.036559  607523 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 11:57:07.437110  607523 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 11:57:07.602655  607523 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 11:57:07.603483  607523 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 11:57:07.607251  607523 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 11:57:07.612344  607523 out.go:252]   - Booting up control plane ...
	I1213 11:57:07.612453  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 11:57:07.612542  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 11:57:07.612663  607523 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 11:57:07.626734  607523 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 11:57:07.627071  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 11:57:07.634285  607523 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 11:57:07.634609  607523 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 11:57:07.634655  607523 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 11:57:07.773578  607523 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 11:57:07.773700  607523 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:00:53.918383  603921 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00010332s
	I1213 12:00:53.918411  603921 kubeadm.go:319] 
	I1213 12:00:53.918468  603921 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:00:53.918502  603921 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:00:53.918607  603921 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:00:53.918611  603921 kubeadm.go:319] 
	I1213 12:00:53.918715  603921 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:00:53.918747  603921 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:00:53.918778  603921 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:00:53.918782  603921 kubeadm.go:319] 
	I1213 12:00:53.924880  603921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:00:53.925344  603921 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:00:53.925460  603921 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:00:53.925729  603921 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:00:53.925740  603921 kubeadm.go:319] 
	I1213 12:00:53.925866  603921 kubeadm.go:403] duration metric: took 8m5.987919453s to StartCluster
	I1213 12:00:53.925907  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:00:53.925972  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:00:53.926107  603921 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:00:53.953173  603921 cri.go:89] found id: ""
	I1213 12:00:53.953257  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.953275  603921 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:00:53.953283  603921 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:00:53.953363  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:00:53.984628  603921 cri.go:89] found id: ""
	I1213 12:00:53.984655  603921 logs.go:282] 0 containers: []
	W1213 12:00:53.984665  603921 logs.go:284] No container was found matching "etcd"
	I1213 12:00:53.984671  603921 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:00:53.984731  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:00:54.014942  603921 cri.go:89] found id: ""
	I1213 12:00:54.014969  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.014978  603921 logs.go:284] No container was found matching "coredns"
	I1213 12:00:54.014986  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:00:54.015045  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:00:54.064854  603921 cri.go:89] found id: ""
	I1213 12:00:54.064881  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.064890  603921 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:00:54.064897  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:00:54.064981  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:00:54.132162  603921 cri.go:89] found id: ""
	I1213 12:00:54.132187  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.132195  603921 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:00:54.132201  603921 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:00:54.132311  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:00:54.159680  603921 cri.go:89] found id: ""
	I1213 12:00:54.159703  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.159712  603921 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:00:54.159718  603921 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:00:54.159779  603921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:00:54.185867  603921 cri.go:89] found id: ""
	I1213 12:00:54.185893  603921 logs.go:282] 0 containers: []
	W1213 12:00:54.185902  603921 logs.go:284] No container was found matching "kindnet"
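After the timeout, minikube checks each control-plane component for a matching CRI container, and every lookup above comes back empty. A condensed sketch of those checks, assuming crictl is already wired to the CRI-O socket as elsewhere in this log:

    # An empty result for a component means it was never created as a container
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done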
	I1213 12:00:54.185912  603921 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:00:54.185923  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:00:54.228270  603921 logs.go:123] Gathering logs for container status ...
	I1213 12:00:54.228303  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:00:54.257730  603921 logs.go:123] Gathering logs for kubelet ...
	I1213 12:00:54.257759  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:00:54.324854  603921 logs.go:123] Gathering logs for dmesg ...
	I1213 12:00:54.324892  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:00:54.342225  603921 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:00:54.342252  603921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:00:54.409722  603921 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:00:54.400901    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.401672    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403289    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.403849    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:00:54.405570    5612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
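The 'describe nodes' attempt fails with connection refused because nothing is serving the apiserver port. A quick, hedged check for that condition on the node (ss is assumed to be available in the node image):

    sudo ss -tlnp | grep 8443 || echo "no listener on 8443 - apiserver never started"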
	W1213 12:00:54.409752  603921 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:00:54.409821  603921 out.go:285] * 
	W1213 12:00:54.410005  603921 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.410026  603921 out.go:285] * 
	W1213 12:00:54.412399  603921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:00:54.417573  603921 out.go:203] 
	W1213 12:00:54.420481  603921 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00010332s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:00:54.420529  603921 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:00:54.420553  603921 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:00:54.423665  603921 out.go:203] 
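The suggestion above points at a cgroup-driver mismatch between the kubelet and the container runtime. A hedged sketch of the retry it describes; the profile name is a placeholder and the test's original start arguments (docker driver, CRI-O runtime) would need to be repeated:

    minikube start -p <profile> --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd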
	I1213 12:01:07.773320  607523 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000195913s
	I1213 12:01:07.773347  607523 kubeadm.go:319] 
	I1213 12:01:07.773405  607523 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1213 12:01:07.773438  607523 kubeadm.go:319] 	- The kubelet is not running
	I1213 12:01:07.773542  607523 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1213 12:01:07.773547  607523 kubeadm.go:319] 
	I1213 12:01:07.773652  607523 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1213 12:01:07.773685  607523 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1213 12:01:07.773715  607523 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1213 12:01:07.773720  607523 kubeadm.go:319] 
	I1213 12:01:07.777876  607523 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:01:07.778275  607523 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1213 12:01:07.778377  607523 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:01:07.778624  607523 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1213 12:01:07.778630  607523 kubeadm.go:319] 
	I1213 12:01:07.778695  607523 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1213 12:01:07.778746  607523 kubeadm.go:403] duration metric: took 8m7.596100369s to StartCluster
	I1213 12:01:07.778786  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:01:07.778843  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:01:07.814673  607523 cri.go:89] found id: ""
	I1213 12:01:07.814694  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.814703  607523 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:01:07.814709  607523 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:01:07.814771  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:01:07.872169  607523 cri.go:89] found id: ""
	I1213 12:01:07.872191  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.872199  607523 logs.go:284] No container was found matching "etcd"
	I1213 12:01:07.872205  607523 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:01:07.872262  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:01:07.897159  607523 cri.go:89] found id: ""
	I1213 12:01:07.897183  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.897192  607523 logs.go:284] No container was found matching "coredns"
	I1213 12:01:07.897198  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:01:07.897271  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:01:07.926240  607523 cri.go:89] found id: ""
	I1213 12:01:07.926266  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.926275  607523 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:01:07.926285  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:01:07.926342  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:01:07.954071  607523 cri.go:89] found id: ""
	I1213 12:01:07.954144  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.954168  607523 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:01:07.954187  607523 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:01:07.954259  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:01:07.980272  607523 cri.go:89] found id: ""
	I1213 12:01:07.980300  607523 logs.go:282] 0 containers: []
	W1213 12:01:07.980310  607523 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:01:07.980316  607523 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:01:07.980371  607523 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:01:08.011383  607523 cri.go:89] found id: ""
	I1213 12:01:08.011411  607523 logs.go:282] 0 containers: []
	W1213 12:01:08.011421  607523 logs.go:284] No container was found matching "kindnet"
	I1213 12:01:08.011431  607523 logs.go:123] Gathering logs for kubelet ...
	I1213 12:01:08.011442  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:01:08.079910  607523 logs.go:123] Gathering logs for dmesg ...
	I1213 12:01:08.079950  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:01:08.097373  607523 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:01:08.097401  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:01:08.160941  607523 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:01:08.153055    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.153840    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155465    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155845    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.157368    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:01:08.153055    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.153840    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155465    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.155845    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:01:08.157368    4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:01:08.161010  607523 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:01:08.161029  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:01:08.192670  607523 logs.go:123] Gathering logs for container status ...
	I1213 12:01:08.192707  607523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:01:08.220898  607523 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1213 12:01:08.220962  607523 out.go:285] * 
	W1213 12:01:08.221021  607523 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:01:08.221042  607523 out.go:285] * 
	W1213 12:01:08.223167  607523 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:01:08.228262  607523 out.go:203] 
	W1213 12:01:08.230390  607523 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000195913s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1213 12:01:08.230436  607523 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1213 12:01:08.230456  607523 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1213 12:01:08.233619  607523 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.656750714Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.65679503Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.65686909Z" level=info msg="Create NRI interface"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657011146Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657027532Z" level=info msg="runtime interface created"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.65703938Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657050458Z" level=info msg="runtime interface starting up..."
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657056603Z" level=info msg="starting plugins..."
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657071118Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 11:52:58 newest-cni-800979 crio[841]: time="2025-12-13T11:52:58.657157798Z" level=info msg="No systemd watchdog enabled"
	Dec 13 11:52:58 newest-cni-800979 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.658289681Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=44bde5a7-ef91-4bfc-b2de-9f916c14ea3c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.659003779Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e31a0601-0e26-42cc-9404-dcdd39389cdb name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.65956494Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=ac873216-657d-4cc0-892e-00880e41eafa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.65999591Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=530a0292-db1c-43ef-859f-467c374fb0aa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.660429193Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a9a7e7c8-e21b-4849-b310-763c391d55ad name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.660878797Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=091f4636-0f31-494b-b2e3-c60ba6c5537e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:53:00 newest-cni-800979 crio[841]: time="2025-12-13T11:53:00.661381094Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=4d39732e-132e-4aab-83a7-bf35ce936d10 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.736588087Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a83c8ed6-f494-4ce0-badc-348aab186d95 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.737233081Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d6def276-17b4-41a9-b735-a72e840419d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.737730291Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=1dbb7a1b-99e5-4954-8a8f-33c6f4482cb4 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.738159423Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=a8c3fac2-6ffe-4483-991e-866f2c39acf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.738635382Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=b4e624e1-03da-40a9-aa60-b9cb1b62a27c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.739062077Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=43555a22-dfd3-4770-a890-ee016b44ec91 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 11:57:05 newest-cni-800979 crio[841]: time="2025-12-13T11:57:05.73979069Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=91be6437-ff89-42db-9528-f454720eb4de name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:02:47.163005    6003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:02:47.163942    6003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:02:47.165750    6003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:02:47.166339    6003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:02:47.168036    6003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:02:47 up  3:45,  0 user,  load average: 0.38, 0.80, 1.44
	Linux newest-cni-800979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:02:44 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:02:45 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 769.
	Dec 13 12:02:45 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:45 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:45 newest-cni-800979 kubelet[5889]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:45 newest-cni-800979 kubelet[5889]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:45 newest-cni-800979 kubelet[5889]: E1213 12:02:45.329817    5889 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:02:45 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:02:45 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 770.
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:46 newest-cni-800979 kubelet[5900]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:46 newest-cni-800979 kubelet[5900]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:46 newest-cni-800979 kubelet[5900]: E1213 12:02:46.088790    5900 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 771.
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:02:46 newest-cni-800979 kubelet[5926]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:46 newest-cni-800979 kubelet[5926]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:02:46 newest-cni-800979 kubelet[5926]: E1213 12:02:46.847014    5926 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:02:46 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979: exit status 6 (359.456589ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 12:02:47.681974  620501 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-800979" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-800979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (97.81s)
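Note on the failure above (editorial, not part of the captured run): the kubelet journal shows the actual crash loop cause, "kubelet is configured to not run on a host using cgroup v1", which matches the kubeadm SystemVerification warning earlier in this log that kubelet v1.35+ requires 'FailCgroupV1' to be set to 'false' (and the verification to be skipped) before it will run on a cgroup v1 host such as this runner. As a purely illustrative sketch, assuming the KubeletConfiguration v1beta1 field name failCgroupV1 implied by that warning, the override would look roughly like the fragment below; migrating the runner to a cgroup v2 host, as the warning recommends, avoids the override entirely.

	# Illustrative KubeletConfiguration fragment (assumption: v1beta1 field failCgroupV1, per the warning in this log)
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false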

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (376.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m10.48154274s)

                                                
                                                
-- stdout --
	* [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 12:02:49.233504  620795 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:02:49.233644  620795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:02:49.233655  620795 out.go:374] Setting ErrFile to fd 2...
	I1213 12:02:49.233660  620795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:02:49.233910  620795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:02:49.234294  620795 out.go:368] Setting JSON to false
	I1213 12:02:49.235159  620795 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13522,"bootTime":1765613848,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:02:49.235231  620795 start.go:143] virtualization:  
	I1213 12:02:49.240415  620795 out.go:179] * [newest-cni-800979] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:02:49.243444  620795 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:02:49.243505  620795 notify.go:221] Checking for updates...
	I1213 12:02:49.249923  620795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:02:49.252821  620795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:02:49.255716  620795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:02:49.258605  620795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:02:49.261497  620795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:02:49.264842  620795 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:02:49.265447  620795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:02:49.298976  620795 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:02:49.299102  620795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:02:49.360087  620795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:02:49.350373468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:02:49.360195  620795 docker.go:319] overlay module found
	I1213 12:02:49.363607  620795 out.go:179] * Using the docker driver based on existing profile
	I1213 12:02:49.366432  620795 start.go:309] selected driver: docker
	I1213 12:02:49.366449  620795 start.go:927] validating driver "docker" against &{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:02:49.366561  620795 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:02:49.367304  620795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:02:49.420058  620795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:02:49.411076686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:02:49.420394  620795 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 12:02:49.420426  620795 cni.go:84] Creating CNI manager for ""
	I1213 12:02:49.420475  620795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:02:49.420519  620795 start.go:353] cluster config:
	{Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:02:49.425561  620795 out.go:179] * Starting "newest-cni-800979" primary control-plane node in "newest-cni-800979" cluster
	I1213 12:02:49.428357  620795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:02:49.431401  620795 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:02:49.434172  620795 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:02:49.434226  620795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1213 12:02:49.434240  620795 cache.go:65] Caching tarball of preloaded images
	I1213 12:02:49.434255  620795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:02:49.434334  620795 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 12:02:49.434345  620795 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 12:02:49.434462  620795 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 12:02:49.454054  620795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:02:49.454078  620795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:02:49.454100  620795 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:02:49.454140  620795 start.go:360] acquireMachinesLock for newest-cni-800979: {Name:mk98646479cdf6b123b7b6024833c6594650d415 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:02:49.454209  620795 start.go:364] duration metric: took 40.944µs to acquireMachinesLock for "newest-cni-800979"
	I1213 12:02:49.454234  620795 start.go:96] Skipping create...Using existing machine configuration
	I1213 12:02:49.454240  620795 fix.go:54] fixHost starting: 
	I1213 12:02:49.454523  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:49.472085  620795 fix.go:112] recreateIfNeeded on newest-cni-800979: state=Stopped err=<nil>
	W1213 12:02:49.472121  620795 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 12:02:49.475488  620795 out.go:252] * Restarting existing docker container for "newest-cni-800979" ...
	I1213 12:02:49.475615  620795 cli_runner.go:164] Run: docker start newest-cni-800979
	I1213 12:02:49.733455  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:49.759707  620795 kic.go:430] container "newest-cni-800979" state is running.
	I1213 12:02:49.760119  620795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 12:02:49.786102  620795 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/config.json ...
	I1213 12:02:49.786339  620795 machine.go:94] provisionDockerMachine start ...
	I1213 12:02:49.786415  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:49.818261  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:49.818584  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:49.818599  620795 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:02:49.819284  620795 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:02:52.971159  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 12:02:52.971183  620795 ubuntu.go:182] provisioning hostname "newest-cni-800979"
	I1213 12:02:52.971255  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:52.989036  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:52.989363  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:52.989383  620795 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-800979 && echo "newest-cni-800979" | sudo tee /etc/hostname
	I1213 12:02:53.149336  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-800979
	
	I1213 12:02:53.149444  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.167147  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:53.167461  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:53.167485  620795 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-800979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-800979/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-800979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:02:53.315867  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:02:53.315938  620795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:02:53.315974  620795 ubuntu.go:190] setting up certificates
	I1213 12:02:53.316007  620795 provision.go:84] configureAuth start
	I1213 12:02:53.316088  620795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 12:02:53.333295  620795 provision.go:143] copyHostCerts
	I1213 12:02:53.333374  620795 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:02:53.333389  620795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:02:53.333473  620795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:02:53.333584  620795 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:02:53.333595  620795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:02:53.333624  620795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:02:53.333688  620795 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:02:53.333695  620795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:02:53.333721  620795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:02:53.333777  620795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.newest-cni-800979 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-800979]
	I1213 12:02:53.395970  620795 provision.go:177] copyRemoteCerts
	I1213 12:02:53.396040  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:02:53.396087  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.418352  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:53.528800  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 12:02:53.552957  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 12:02:53.574405  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:02:53.600909  620795 provision.go:87] duration metric: took 284.882424ms to configureAuth
	I1213 12:02:53.600947  620795 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:02:53.601229  620795 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:02:53.601368  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.620841  620795 main.go:143] libmachine: Using SSH client type: native
	I1213 12:02:53.621175  620795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1213 12:02:53.621196  620795 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:02:53.931497  620795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:02:53.931546  620795 machine.go:97] duration metric: took 4.145187533s to provisionDockerMachine
	I1213 12:02:53.931564  620795 start.go:293] postStartSetup for "newest-cni-800979" (driver="docker")
	I1213 12:02:53.931581  620795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:02:53.931661  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:02:53.931721  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:53.951503  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.064288  620795 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:02:54.068121  620795 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:02:54.068153  620795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:02:54.068165  620795 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:02:54.068219  620795 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:02:54.068306  620795 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:02:54.068414  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:02:54.076516  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:02:54.095247  620795 start.go:296] duration metric: took 163.663698ms for postStartSetup
	I1213 12:02:54.095344  620795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:02:54.095390  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:54.113108  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.216773  620795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:02:54.221562  620795 fix.go:56] duration metric: took 4.76731447s for fixHost
	I1213 12:02:54.221592  620795 start.go:83] releasing machines lock for "newest-cni-800979", held for 4.767370191s
	I1213 12:02:54.221679  620795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-800979
	I1213 12:02:54.239044  620795 ssh_runner.go:195] Run: cat /version.json
	I1213 12:02:54.239115  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:54.239379  620795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:02:54.239436  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:54.257600  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.258181  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:54.472852  620795 ssh_runner.go:195] Run: systemctl --version
	I1213 12:02:54.480091  620795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:02:54.517083  620795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:02:54.521766  620795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:02:54.521872  620795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:02:54.530083  620795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 12:02:54.530110  620795 start.go:496] detecting cgroup driver to use...
	I1213 12:02:54.530144  620795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:02:54.530193  620795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:02:54.545964  620795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:02:54.559256  620795 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:02:54.559343  620795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:02:54.575678  620795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:02:54.589520  620795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:02:54.710146  620795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:02:54.827028  620795 docker.go:234] disabling docker service ...
	I1213 12:02:54.827095  620795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:02:54.842094  620795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:02:54.855410  620795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:02:54.972511  620795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:02:55.125284  620795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:02:55.138359  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:02:55.153286  620795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:02:55.153415  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.163260  620795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:02:55.163390  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.174114  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.184426  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.194168  620795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:02:55.203273  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.213465  620795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.223135  620795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:02:55.232693  620795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:02:55.241786  620795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:02:55.250375  620795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:02:55.372259  620795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 12:02:55.566896  620795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:02:55.566968  620795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:02:55.570910  620795 start.go:564] Will wait 60s for crictl version
	I1213 12:02:55.570982  620795 ssh_runner.go:195] Run: which crictl
	I1213 12:02:55.574692  620795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:02:55.599155  620795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:02:55.599241  620795 ssh_runner.go:195] Run: crio --version
	I1213 12:02:55.632146  620795 ssh_runner.go:195] Run: crio --version
	I1213 12:02:55.666590  620795 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 12:02:55.669593  620795 cli_runner.go:164] Run: docker network inspect newest-cni-800979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:02:55.685409  620795 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 12:02:55.689403  620795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:02:55.701909  620795 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 12:02:55.704755  620795 kubeadm.go:884] updating cluster {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:02:55.704897  620795 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:02:55.704972  620795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:02:55.736637  620795 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:02:55.736659  620795 crio.go:433] Images already preloaded, skipping extraction
	I1213 12:02:55.736712  620795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:02:55.768016  620795 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:02:55.768037  620795 cache_images.go:86] Images are preloaded, skipping loading
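The preload check above is driven by "crictl images --output json". A hedged way to see what is actually in the node's CRI-O image store (the jq field path reflects current crictl JSON output and is illustrative):

    minikube -p newest-cni-800979 ssh -- sudo crictl images --output json | jq -r '.images[].repoTags[]'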
	I1213 12:02:55.768046  620795 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 12:02:55.768149  620795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-800979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
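The ExecStart override above is what gets written to the kubelet drop-in a few lines later (10-kubeadm.conf, 374 bytes). A sketch of how to inspect what systemd actually loads on the node, outside the test:

    minikube -p newest-cni-800979 ssh -- sudo systemctl cat kubelet
    minikube -p newest-cni-800979 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf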
	I1213 12:02:55.768237  620795 ssh_runner.go:195] Run: crio config
	I1213 12:02:55.851308  620795 cni.go:84] Creating CNI manager for ""
	I1213 12:02:55.851342  620795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:02:55.851379  620795 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 12:02:55.851413  620795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-800979 NodeName:newest-cni-800979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:02:55.851664  620795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-800979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:02:55.852108  620795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 12:02:55.864605  620795 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:02:55.864684  620795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:02:55.872619  620795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 12:02:55.885648  620795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 12:02:55.898455  620795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
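The generated kubeadm config shown above has just been copied to /var/tmp/minikube/kubeadm.yaml.new. A sketch of an offline sanity check, assuming the bundled kubeadm supports the "config validate" subcommand (available in recent releases):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new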
	I1213 12:02:55.911158  620795 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:02:55.914686  620795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
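Both /etc/hosts edits follow the same pattern: strip any stale entry for the name, append the pinned mapping, and copy the temp file back with sudo. After this step the node should resolve both internal names; expected entries, given the addresses in this run:

    192.168.94.1  host.minikube.internal
    192.168.94.2  control-plane.minikube.internal
    # quick check: minikube -p newest-cni-800979 ssh -- grep minikube.internal /etc/hosts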
	I1213 12:02:55.924267  620795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:02:56.039465  620795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:02:56.056121  620795 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979 for IP: 192.168.94.2
	I1213 12:02:56.056196  620795 certs.go:195] generating shared ca certs ...
	I1213 12:02:56.056229  620795 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.056418  620795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:02:56.056488  620795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:02:56.056512  620795 certs.go:257] generating profile certs ...
	I1213 12:02:56.056675  620795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/client.key
	I1213 12:02:56.056781  620795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key.e5aab606
	I1213 12:02:56.056855  620795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key
	I1213 12:02:56.057048  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:02:56.057114  620795 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:02:56.057138  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:02:56.057199  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:02:56.057251  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:02:56.057311  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:02:56.057397  620795 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:02:56.058029  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:02:56.076025  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:02:56.093944  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:02:56.111895  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:02:56.130506  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 12:02:56.150046  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 12:02:56.168049  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:02:56.185529  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/newest-cni-800979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 12:02:56.203246  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:02:56.222133  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:02:56.239211  620795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:02:56.256762  620795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:02:56.269543  620795 ssh_runner.go:195] Run: openssl version
	I1213 12:02:56.276330  620795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.283718  620795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:02:56.291258  620795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.295162  620795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.295230  620795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:02:56.336913  620795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:02:56.344459  620795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.351711  620795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:02:56.359139  620795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.362888  620795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.362953  620795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:02:56.404066  620795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:02:56.411918  620795 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.419334  620795 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:02:56.427027  620795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.430803  620795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.430872  620795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:02:56.472238  620795 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
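Each certificate copied to /usr/share/ca-certificates is trusted by symlinking it under /etc/ssl/certs using OpenSSL's subject-hash naming, which is why the log hashes each file and then tests for links such as b5213941.0. A sketch of how that link name is derived:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l /etc/ssl/certs/${HASH}.0    # should point back at minikubeCA.pem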
	I1213 12:02:56.479691  620795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:02:56.483350  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 12:02:56.525393  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 12:02:56.566931  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 12:02:56.609192  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 12:02:56.652105  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 12:02:56.693040  620795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
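The -checkend 86400 calls above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire within that window. A standalone sketch:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"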
	I1213 12:02:56.733855  620795 kubeadm.go:401] StartCluster: {Name:newest-cni-800979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-800979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:02:56.733948  620795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:02:56.734009  620795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:02:56.761731  620795 cri.go:89] found id: ""
	I1213 12:02:56.761801  620795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:02:56.769616  620795 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 12:02:56.769635  620795 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 12:02:56.769685  620795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 12:02:56.777201  620795 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 12:02:56.777595  620795 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-800979" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:02:56.777697  620795 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-800979" cluster setting kubeconfig missing "newest-cni-800979" context setting]
	I1213 12:02:56.777977  620795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.779229  620795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 12:02:56.788235  620795 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 12:02:56.788267  620795 kubeadm.go:602] duration metric: took 18.626939ms to restartPrimaryControlPlane
	I1213 12:02:56.788277  620795 kubeadm.go:403] duration metric: took 54.43387ms to StartCluster
	I1213 12:02:56.788293  620795 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.788354  620795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:02:56.788977  620795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:02:56.789180  620795 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:02:56.789467  620795 config.go:182] Loaded profile config "newest-cni-800979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:02:56.789509  620795 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:02:56.789573  620795 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-800979"
	I1213 12:02:56.789587  620795 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-800979"
	I1213 12:02:56.789612  620795 host.go:66] Checking if "newest-cni-800979" exists ...
	I1213 12:02:56.789613  620795 addons.go:70] Setting dashboard=true in profile "newest-cni-800979"
	I1213 12:02:56.789672  620795 addons.go:239] Setting addon dashboard=true in "newest-cni-800979"
	W1213 12:02:56.789696  620795 addons.go:248] addon dashboard should already be in state true
	I1213 12:02:56.789736  620795 host.go:66] Checking if "newest-cni-800979" exists ...
	I1213 12:02:56.790085  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.790244  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.792126  620795 addons.go:70] Setting default-storageclass=true in profile "newest-cni-800979"
	I1213 12:02:56.792166  620795 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-800979"
	I1213 12:02:56.792556  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.795330  620795 out.go:179] * Verifying Kubernetes components...
	I1213 12:02:56.801331  620795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:02:56.852170  620795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:02:56.854454  620795 addons.go:239] Setting addon default-storageclass=true in "newest-cni-800979"
	I1213 12:02:56.854495  620795 host.go:66] Checking if "newest-cni-800979" exists ...
	I1213 12:02:56.854919  620795 cli_runner.go:164] Run: docker container inspect newest-cni-800979 --format={{.State.Status}}
	I1213 12:02:56.855145  620795 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:02:56.855170  620795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:02:56.855216  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:56.855504  620795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 12:02:56.858456  620795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 12:02:56.862056  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 12:02:56.862082  620795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 12:02:56.862152  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:56.899687  620795 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:02:56.899709  620795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:02:56.899772  620795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-800979
	I1213 12:02:56.927481  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:56.946897  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:56.963888  620795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/newest-cni-800979/id_rsa Username:docker}
	I1213 12:02:57.047178  620795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:02:57.085323  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:02:57.106670  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:02:57.109541  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 12:02:57.109565  620795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 12:02:57.129099  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 12:02:57.129124  620795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 12:02:57.143152  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 12:02:57.143228  620795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 12:02:57.157748  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 12:02:57.157771  620795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 12:02:57.171682  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 12:02:57.171707  620795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 12:02:57.204203  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 12:02:57.204229  620795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 12:02:57.216958  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 12:02:57.216983  620795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 12:02:57.231106  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 12:02:57.231131  620795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 12:02:57.244346  620795 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:57.244370  620795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 12:02:57.257080  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:57.772618  620795 api_server.go:52] waiting for apiserver process to appear ...
	I1213 12:02:57.772982  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:02:57.772808  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:57.773078  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.773118  620795 retry.go:31] will retry after 217.005737ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.773170  620795 retry.go:31] will retry after 239.962871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:57.772882  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.773194  620795 retry.go:31] will retry after 147.663386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
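Every failure in this block is the same symptom: kubectl cannot reach the apiserver at localhost:8443 (connection refused) to download the OpenAPI schema for client-side validation, because the control plane is still coming back up after the restart, so minikube retries each apply with a short backoff. A hedged way to wait for the apiserver from inside the node before retrying by hand (assumes the default anonymous access to /readyz; --validate=false is the workaround the error text itself suggests):

    until curl -sk https://localhost:8443/readyz >/dev/null; do sleep 1; done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false -f /etc/kubernetes/addons/storageclass.yaml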
	I1213 12:02:57.921773  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:57.978386  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.978421  620795 retry.go:31] will retry after 228.081406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:57.990577  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:58.014070  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:02:58.127933  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.128026  620795 retry.go:31] will retry after 373.102827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:58.127984  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.128061  620795 retry.go:31] will retry after 369.212229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.207107  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:58.267352  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.267385  620795 retry.go:31] will retry after 334.48336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.273686  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:58.497842  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:02:58.501298  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:02:58.602795  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:58.629431  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.629513  620795 retry.go:31] will retry after 680.299436ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:58.629708  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.629741  620795 retry.go:31] will retry after 376.262259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:02:58.684645  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.684691  620795 retry.go:31] will retry after 1.200875286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:58.773900  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.007198  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:02:59.067125  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.067165  620795 retry.go:31] will retry after 592.59933ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.273796  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.310724  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:02:59.374429  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.374460  620795 retry.go:31] will retry after 1.123869523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.660188  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:02:59.746796  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.746834  620795 retry.go:31] will retry after 827.424249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.773951  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.886643  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:59.984018  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.984054  620795 retry.go:31] will retry after 1.031600228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.289311  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:00.498512  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:00.574703  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:00.609412  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.609443  620795 retry.go:31] will retry after 1.594897337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:00.654022  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.654055  620795 retry.go:31] will retry after 1.847551508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.773391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.016343  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:01.149191  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.149241  620795 retry.go:31] will retry after 1.156400239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.273296  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.773106  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:02.204552  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:02.273738  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:02.274099  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.274136  620795 retry.go:31] will retry after 1.092655081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.305854  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:02.368964  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.369001  620795 retry.go:31] will retry after 1.680740365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.502311  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:02.587589  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.587627  620795 retry.go:31] will retry after 1.930642019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.773890  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.281133  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.367295  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:03.462797  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.462834  620795 retry.go:31] will retry after 1.480584037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.773095  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.050289  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:04.211663  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.211692  620795 retry.go:31] will retry after 4.628682765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.273235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.518978  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:04.583937  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.583972  620795 retry.go:31] will retry after 4.359648713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.773380  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.944170  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:05.011259  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.011298  620795 retry.go:31] will retry after 2.730254551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.273717  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:05.773164  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.274023  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.773331  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.742621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:07.773999  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:07.885064  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:07.885095  620795 retry.go:31] will retry after 5.399825259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.773645  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.841141  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:08.935930  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.935967  620795 retry.go:31] will retry after 8.567303782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.944298  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:09.032112  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:09.032154  620795 retry.go:31] will retry after 7.715566724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:09.273871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:09.773704  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.273974  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.773144  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.273093  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.773168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.273119  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.773938  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.274064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.285062  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.346306  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.346338  620795 retry.go:31] will retry after 9.878335415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.773923  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:14.273845  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:14.773934  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.774017  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.273243  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.748013  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:16.773600  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:16.899498  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.899555  620795 retry.go:31] will retry after 7.173965376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.273146  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:17.504219  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:17.614341  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.614369  620795 retry.go:31] will retry after 8.805046452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.773767  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.273931  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.773442  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:19.273647  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:19.773235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.273783  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.774109  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.273100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.774041  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.273187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.773919  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:23.224947  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:23.273354  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:23.287102  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.287132  620795 retry.go:31] will retry after 17.975754277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.774029  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.073794  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:24.135298  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.135337  620795 retry.go:31] will retry after 17.719019377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.273481  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.773666  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.773170  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.273652  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.420263  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:26.478183  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.478224  620795 retry.go:31] will retry after 20.903659468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.773685  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.273297  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.773524  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:29.273854  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:29.773973  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.273040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.773142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.273258  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.773723  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.274053  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.774024  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.273125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.773200  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:34.273224  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:34.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.273423  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.773837  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.273251  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.773088  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.773099  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.773678  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:39.273565  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:39.773916  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.274028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.773120  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.263107  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:41.273658  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:41.328103  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.328152  620795 retry.go:31] will retry after 24.557962123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.773949  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.855229  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:41.913722  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.913758  620795 retry.go:31] will retry after 29.657634591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:42.273168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:42.773137  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.273064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.773040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.273531  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.773694  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.273864  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.773153  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.273336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.773222  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.273977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.382145  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:47.444684  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.444761  620795 retry.go:31] will retry after 14.939941469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.773125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.773715  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.274132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.773105  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.273278  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.773375  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.273108  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.773957  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.273086  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.773220  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.273134  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.773528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.273748  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.773661  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.273945  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.773185  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.273156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.773921  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:57.273352  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:03:57.273425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:03:57.360759  620795 cri.go:89] found id: ""
	I1213 12:03:57.360784  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.360793  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:03:57.360799  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:03:57.360899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:03:57.386673  620795 cri.go:89] found id: ""
	I1213 12:03:57.386699  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.386709  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:03:57.386715  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:03:57.386772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:03:57.412179  620795 cri.go:89] found id: ""
	I1213 12:03:57.412202  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.412211  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:03:57.412217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:03:57.412275  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:03:57.440758  620795 cri.go:89] found id: ""
	I1213 12:03:57.440782  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.440791  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:03:57.440797  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:03:57.440863  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:03:57.474164  620795 cri.go:89] found id: ""
	I1213 12:03:57.474189  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.474198  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:03:57.474205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:03:57.474266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:03:57.513790  620795 cri.go:89] found id: ""
	I1213 12:03:57.513811  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.513820  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:03:57.513826  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:03:57.513882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:03:57.549685  620795 cri.go:89] found id: ""
	I1213 12:03:57.549708  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.549716  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:03:57.549723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:03:57.549784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:03:57.575809  620795 cri.go:89] found id: ""
	I1213 12:03:57.575830  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.575839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:03:57.575848  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:03:57.575860  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:03:57.645191  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:03:57.645229  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:03:57.662016  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:03:57.662048  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:03:57.724395  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:03:57.724433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:03:57.724446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:03:57.752976  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:03:57.753012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:00.282268  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:00.369064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:00.369151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:00.446224  620795 cri.go:89] found id: ""
	I1213 12:04:00.446257  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.446267  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:00.446274  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:00.446398  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:00.492701  620795 cri.go:89] found id: ""
	I1213 12:04:00.492728  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.492737  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:00.492744  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:00.492814  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:00.537493  620795 cri.go:89] found id: ""
	I1213 12:04:00.537573  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.537600  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:00.537617  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:00.537703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:00.567417  620795 cri.go:89] found id: ""
	I1213 12:04:00.567457  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.567467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:00.567493  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:00.567660  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:00.597259  620795 cri.go:89] found id: ""
	I1213 12:04:00.597333  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.597358  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:00.597371  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:00.597453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:00.624935  620795 cri.go:89] found id: ""
	I1213 12:04:00.625008  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.625032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:00.625053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:00.625125  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:00.656802  620795 cri.go:89] found id: ""
	I1213 12:04:00.656830  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.656846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:00.656853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:00.656924  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:00.684243  620795 cri.go:89] found id: ""
	I1213 12:04:00.684318  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.684342  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:00.684364  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:00.684406  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:00.755205  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:00.755244  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:00.772314  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:00.772345  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:00.841157  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:00.841236  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:00.841257  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:00.870321  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:00.870357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:02.384998  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:02.445321  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:02.445354  620795 retry.go:31] will retry after 47.283712675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:03.403559  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:03.414405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:03.414472  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:03.440207  620795 cri.go:89] found id: ""
	I1213 12:04:03.440275  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.440299  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:03.440320  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:03.440406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:03.473860  620795 cri.go:89] found id: ""
	I1213 12:04:03.473906  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.473916  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:03.473923  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:03.474005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:03.500069  620795 cri.go:89] found id: ""
	I1213 12:04:03.500102  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.500111  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:03.500118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:03.500194  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:03.550253  620795 cri.go:89] found id: ""
	I1213 12:04:03.550329  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.550353  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:03.550372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:03.550459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:03.595628  620795 cri.go:89] found id: ""
	I1213 12:04:03.595713  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.595737  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:03.595757  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:03.595871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:03.626718  620795 cri.go:89] found id: ""
	I1213 12:04:03.626796  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.626827  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:03.626849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:03.626954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:03.657254  620795 cri.go:89] found id: ""
	I1213 12:04:03.657281  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.657290  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:03.657297  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:03.657356  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:03.682193  620795 cri.go:89] found id: ""
	I1213 12:04:03.682268  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.682292  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:03.682315  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:03.682355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:03.750002  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:03.750025  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:03.750039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:03.779008  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:03.779046  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:03.807344  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:03.807424  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:03.879158  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:03.879201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:05.886355  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:05.944754  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:05.944842  620795 retry.go:31] will retry after 33.803790372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.397350  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:06.407918  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:06.407990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:06.436013  620795 cri.go:89] found id: ""
	I1213 12:04:06.436040  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.436049  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:06.436056  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:06.436121  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:06.462051  620795 cri.go:89] found id: ""
	I1213 12:04:06.462074  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.462083  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:06.462089  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:06.462147  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:06.487916  620795 cri.go:89] found id: ""
	I1213 12:04:06.487943  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.487952  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:06.487959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:06.488027  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:06.514150  620795 cri.go:89] found id: ""
	I1213 12:04:06.514181  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.514190  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:06.514196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:06.514255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:06.567862  620795 cri.go:89] found id: ""
	I1213 12:04:06.567900  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.567910  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:06.567917  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:06.567977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:06.615399  620795 cri.go:89] found id: ""
	I1213 12:04:06.615428  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.615446  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:06.615453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:06.615546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:06.645078  620795 cri.go:89] found id: ""
	I1213 12:04:06.645150  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.645174  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:06.645196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:06.645278  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:06.673976  620795 cri.go:89] found id: ""
	I1213 12:04:06.674002  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.674011  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:06.674022  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:06.674067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:06.703467  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:06.703504  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:06.731693  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:06.731721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:06.801110  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:06.801154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:06.817774  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:06.817804  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:06.899087  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:09.400132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:09.410430  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:09.410500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:09.440067  620795 cri.go:89] found id: ""
	I1213 12:04:09.440090  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.440100  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:09.440107  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:09.440167  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:09.470041  620795 cri.go:89] found id: ""
	I1213 12:04:09.470062  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.470071  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:09.470078  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:09.470135  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:09.496421  620795 cri.go:89] found id: ""
	I1213 12:04:09.496444  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.496453  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:09.496459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:09.496516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:09.535210  620795 cri.go:89] found id: ""
	I1213 12:04:09.535233  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.535241  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:09.535248  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:09.535322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:09.593867  620795 cri.go:89] found id: ""
	I1213 12:04:09.593894  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.593905  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:09.593912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:09.593967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:09.633869  620795 cri.go:89] found id: ""
	I1213 12:04:09.633895  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.633904  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:09.633911  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:09.633967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:09.660082  620795 cri.go:89] found id: ""
	I1213 12:04:09.660104  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.660113  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:09.660119  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:09.660180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:09.686975  620795 cri.go:89] found id: ""
	I1213 12:04:09.687005  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.687013  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:09.687023  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:09.687035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:09.756960  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:09.756994  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:09.779895  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:09.779929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:09.858208  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:09.858229  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:09.858243  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:09.886438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:09.886472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:11.571741  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:11.635299  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:11.635338  620795 retry.go:31] will retry after 28.848947099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:12.418247  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:12.428921  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:12.428996  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:12.453422  620795 cri.go:89] found id: ""
	I1213 12:04:12.453447  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.453455  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:12.453462  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:12.453523  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:12.482791  620795 cri.go:89] found id: ""
	I1213 12:04:12.482818  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.482827  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:12.482834  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:12.482892  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:12.509185  620795 cri.go:89] found id: ""
	I1213 12:04:12.509207  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.509216  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:12.509222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:12.509281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:12.555782  620795 cri.go:89] found id: ""
	I1213 12:04:12.555810  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.555820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:12.555868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:12.555953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:12.609661  620795 cri.go:89] found id: ""
	I1213 12:04:12.609682  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.609691  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:12.609697  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:12.609753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:12.636223  620795 cri.go:89] found id: ""
	I1213 12:04:12.636251  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.636268  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:12.636275  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:12.636335  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:12.663456  620795 cri.go:89] found id: ""
	I1213 12:04:12.663484  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.663493  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:12.663499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:12.663583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:12.688687  620795 cri.go:89] found id: ""
	I1213 12:04:12.688714  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.688723  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:12.688733  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:12.688745  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:12.705209  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:12.705240  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:12.766977  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:12.767041  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:12.767064  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:12.795358  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:12.795396  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:12.823112  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:12.823143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:15.388432  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:15.398781  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:15.398905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:15.425880  620795 cri.go:89] found id: ""
	I1213 12:04:15.425920  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.425929  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:15.425935  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:15.426005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:15.451424  620795 cri.go:89] found id: ""
	I1213 12:04:15.451467  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.451477  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:15.451486  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:15.451583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:15.476481  620795 cri.go:89] found id: ""
	I1213 12:04:15.476525  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.476534  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:15.476541  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:15.476612  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:15.502062  620795 cri.go:89] found id: ""
	I1213 12:04:15.502088  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.502097  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:15.502104  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:15.502173  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:15.588057  620795 cri.go:89] found id: ""
	I1213 12:04:15.588132  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.588155  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:15.588175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:15.588279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:15.616479  620795 cri.go:89] found id: ""
	I1213 12:04:15.616506  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.616519  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:15.616526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:15.616602  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:15.649712  620795 cri.go:89] found id: ""
	I1213 12:04:15.649789  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.649813  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:15.649827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:15.649912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:15.675926  620795 cri.go:89] found id: ""
	I1213 12:04:15.675995  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.676019  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:15.676034  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:15.676049  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:15.692725  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:15.692755  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:15.759900  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:15.759963  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:15.759989  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:15.789315  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:15.789425  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:15.818647  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:15.818675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.385812  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:18.396389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:18.396461  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:18.422777  620795 cri.go:89] found id: ""
	I1213 12:04:18.422800  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.422808  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:18.422814  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:18.422873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:18.448579  620795 cri.go:89] found id: ""
	I1213 12:04:18.448607  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.448616  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:18.448622  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:18.448677  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:18.474629  620795 cri.go:89] found id: ""
	I1213 12:04:18.474707  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.474744  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:18.474768  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:18.474859  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:18.499793  620795 cri.go:89] found id: ""
	I1213 12:04:18.499819  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.499828  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:18.499837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:18.499894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:18.531333  620795 cri.go:89] found id: ""
	I1213 12:04:18.531368  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.531377  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:18.531383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:18.531450  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:18.583893  620795 cri.go:89] found id: ""
	I1213 12:04:18.583923  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.583932  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:18.583939  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:18.584008  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:18.620082  620795 cri.go:89] found id: ""
	I1213 12:04:18.620120  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.620129  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:18.620135  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:18.620210  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:18.647112  620795 cri.go:89] found id: ""
	I1213 12:04:18.647137  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.647145  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:18.647155  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:18.647167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.712791  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:18.712833  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:18.728892  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:18.728920  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:18.793078  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:18.793150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:18.793172  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:18.821911  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:18.821947  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:21.353995  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:21.364153  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:21.364265  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:21.389593  620795 cri.go:89] found id: ""
	I1213 12:04:21.389673  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.389690  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:21.389698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:21.389773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:21.418684  620795 cri.go:89] found id: ""
	I1213 12:04:21.418706  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.418715  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:21.418722  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:21.418778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:21.442724  620795 cri.go:89] found id: ""
	I1213 12:04:21.442799  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.442822  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:21.442841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:21.442927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:21.472117  620795 cri.go:89] found id: ""
	I1213 12:04:21.472141  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.472150  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:21.472156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:21.472213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:21.501589  620795 cri.go:89] found id: ""
	I1213 12:04:21.501612  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.501621  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:21.501627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:21.501688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:21.563954  620795 cri.go:89] found id: ""
	I1213 12:04:21.564023  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.564046  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:21.564069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:21.564151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:21.612229  620795 cri.go:89] found id: ""
	I1213 12:04:21.612263  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.612273  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:21.612280  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:21.612339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:21.639602  620795 cri.go:89] found id: ""
	I1213 12:04:21.639636  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.639645  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:21.639655  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:21.639669  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:21.705516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:21.705552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:21.722491  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:21.722521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:21.783641  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:21.783663  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:21.783676  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:21.811307  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:21.811340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:24.340508  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:24.351403  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:24.351482  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:24.382302  620795 cri.go:89] found id: ""
	I1213 12:04:24.382379  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.382404  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:24.382425  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:24.382538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:24.408839  620795 cri.go:89] found id: ""
	I1213 12:04:24.408862  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.408871  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:24.408878  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:24.408936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:24.435623  620795 cri.go:89] found id: ""
	I1213 12:04:24.435651  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.435661  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:24.435667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:24.435727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:24.461121  620795 cri.go:89] found id: ""
	I1213 12:04:24.461149  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.461158  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:24.461165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:24.461251  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:24.486111  620795 cri.go:89] found id: ""
	I1213 12:04:24.486144  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.486153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:24.486176  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:24.486257  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:24.511493  620795 cri.go:89] found id: ""
	I1213 12:04:24.511567  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.511578  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:24.511585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:24.511646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:24.546004  620795 cri.go:89] found id: ""
	I1213 12:04:24.546029  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.546052  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:24.546059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:24.546129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:24.573601  620795 cri.go:89] found id: ""
	I1213 12:04:24.573677  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.573699  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:24.573720  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:24.573758  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:24.651738  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:24.651779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:24.669002  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:24.669035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:24.734744  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:24.734767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:24.734780  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:24.763652  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:24.763687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.296287  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:27.306558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:27.306632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:27.331288  620795 cri.go:89] found id: ""
	I1213 12:04:27.331315  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.331324  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:27.331331  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:27.331388  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:27.357587  620795 cri.go:89] found id: ""
	I1213 12:04:27.357611  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.357620  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:27.357626  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:27.357681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:27.383604  620795 cri.go:89] found id: ""
	I1213 12:04:27.383628  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.383637  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:27.383644  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:27.383699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:27.408104  620795 cri.go:89] found id: ""
	I1213 12:04:27.408183  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.408199  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:27.408207  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:27.408273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:27.434284  620795 cri.go:89] found id: ""
	I1213 12:04:27.434309  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.434318  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:27.434325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:27.434389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:27.459356  620795 cri.go:89] found id: ""
	I1213 12:04:27.459382  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.459391  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:27.459399  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:27.459457  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:27.484476  620795 cri.go:89] found id: ""
	I1213 12:04:27.484543  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.484558  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:27.484565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:27.484630  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:27.510910  620795 cri.go:89] found id: ""
	I1213 12:04:27.510937  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.510946  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:27.510955  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:27.510967  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:27.543054  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:27.543085  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:27.641750  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:27.641818  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:27.641838  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:27.671375  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:27.671412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.701704  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:27.701735  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:30.268871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:30.279472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:30.279561  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:30.305479  620795 cri.go:89] found id: ""
	I1213 12:04:30.305504  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.305513  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:30.305520  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:30.305577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:30.330879  620795 cri.go:89] found id: ""
	I1213 12:04:30.330904  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.330914  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:30.330920  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:30.330978  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:30.358794  620795 cri.go:89] found id: ""
	I1213 12:04:30.358821  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.358830  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:30.358837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:30.358899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:30.384574  620795 cri.go:89] found id: ""
	I1213 12:04:30.384648  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.384662  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:30.384669  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:30.384728  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:30.409348  620795 cri.go:89] found id: ""
	I1213 12:04:30.409374  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.409383  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:30.409390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:30.409460  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:30.435261  620795 cri.go:89] found id: ""
	I1213 12:04:30.435286  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.435295  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:30.435302  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:30.435357  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:30.459810  620795 cri.go:89] found id: ""
	I1213 12:04:30.459834  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.459843  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:30.459849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:30.459906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:30.485697  620795 cri.go:89] found id: ""
	I1213 12:04:30.485720  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.485728  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:30.485738  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:30.485749  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:30.513499  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:30.513534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:30.574739  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:30.574767  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:30.658042  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:30.658078  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:30.678263  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:30.678291  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:30.741695  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.242096  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:33.253053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:33.253146  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:33.279722  620795 cri.go:89] found id: ""
	I1213 12:04:33.279748  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.279756  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:33.279764  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:33.279820  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:33.306092  620795 cri.go:89] found id: ""
	I1213 12:04:33.306129  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.306139  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:33.306163  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:33.306252  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:33.332772  620795 cri.go:89] found id: ""
	I1213 12:04:33.332796  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.332813  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:33.332819  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:33.332882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:33.367716  620795 cri.go:89] found id: ""
	I1213 12:04:33.367744  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.367754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:33.367760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:33.367822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:33.400175  620795 cri.go:89] found id: ""
	I1213 12:04:33.400242  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.400258  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:33.400266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:33.400325  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:33.424852  620795 cri.go:89] found id: ""
	I1213 12:04:33.424877  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.424887  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:33.424894  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:33.424984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:33.453556  620795 cri.go:89] found id: ""
	I1213 12:04:33.453581  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.453590  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:33.453597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:33.453653  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:33.479131  620795 cri.go:89] found id: ""
	I1213 12:04:33.479156  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.479165  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:33.479175  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:33.479187  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:33.549906  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:33.550637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:33.572706  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:33.572863  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:33.662497  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.662522  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:33.662535  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:33.692067  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:33.692111  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:36.220187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:36.230829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:36.230906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:36.260247  620795 cri.go:89] found id: ""
	I1213 12:04:36.260271  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.260280  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:36.260286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:36.260342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:36.285940  620795 cri.go:89] found id: ""
	I1213 12:04:36.285973  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.285982  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:36.285988  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:36.286059  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:36.311531  620795 cri.go:89] found id: ""
	I1213 12:04:36.311553  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.311561  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:36.311568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:36.311633  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:36.336755  620795 cri.go:89] found id: ""
	I1213 12:04:36.336849  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.336865  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:36.336873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:36.336933  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:36.361652  620795 cri.go:89] found id: ""
	I1213 12:04:36.361676  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.361684  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:36.361690  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:36.361748  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:36.392507  620795 cri.go:89] found id: ""
	I1213 12:04:36.392530  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.392539  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:36.392545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:36.392601  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:36.418503  620795 cri.go:89] found id: ""
	I1213 12:04:36.418526  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.418535  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:36.418540  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:36.418614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:36.444832  620795 cri.go:89] found id: ""
	I1213 12:04:36.444856  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.444865  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:36.444874  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:36.444891  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:36.515523  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:36.515566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:36.535671  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:36.535699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:36.655383  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:36.655406  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:36.655421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:36.684176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:36.684212  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.215366  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:39.225843  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:39.225914  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:39.251825  620795 cri.go:89] found id: ""
	I1213 12:04:39.251850  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.251860  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:39.251867  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:39.251927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:39.280966  620795 cri.go:89] found id: ""
	I1213 12:04:39.280991  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.281000  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:39.281007  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:39.281063  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:39.305488  620795 cri.go:89] found id: ""
	I1213 12:04:39.305511  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.305520  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:39.305526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:39.305583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:39.330461  620795 cri.go:89] found id: ""
	I1213 12:04:39.330484  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.330493  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:39.330500  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:39.330556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:39.355410  620795 cri.go:89] found id: ""
	I1213 12:04:39.355483  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.355507  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:39.355565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:39.355706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:39.384890  620795 cri.go:89] found id: ""
	I1213 12:04:39.384916  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.384926  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:39.384933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:39.385017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:39.409735  620795 cri.go:89] found id: ""
	I1213 12:04:39.409758  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.409767  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:39.409773  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:39.409833  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:39.439648  620795 cri.go:89] found id: ""
	I1213 12:04:39.439673  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.439685  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:39.439695  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:39.439706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:39.505768  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:39.505803  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:39.525572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:39.525602  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:39.624619  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:39.624643  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:39.624656  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:39.653269  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:39.653306  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.749621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:39.805957  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:39.806064  620795 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:40.484759  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:40.549677  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:40.549776  620795 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.182348  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:42.195718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:42.195860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:42.224999  620795 cri.go:89] found id: ""
	I1213 12:04:42.225044  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.225058  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:42.225067  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:42.225192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:42.254835  620795 cri.go:89] found id: ""
	I1213 12:04:42.254913  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.254949  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:42.254975  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:42.255077  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:42.283814  620795 cri.go:89] found id: ""
	I1213 12:04:42.283889  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.283916  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:42.283931  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:42.284014  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:42.315795  620795 cri.go:89] found id: ""
	I1213 12:04:42.315823  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.315859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:42.315871  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:42.315954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:42.342987  620795 cri.go:89] found id: ""
	I1213 12:04:42.343026  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.343035  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:42.343042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:42.343114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:42.368935  620795 cri.go:89] found id: ""
	I1213 12:04:42.368969  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.368978  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:42.368986  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:42.369052  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:42.398633  620795 cri.go:89] found id: ""
	I1213 12:04:42.398703  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.398727  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:42.398747  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:42.398834  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:42.424223  620795 cri.go:89] found id: ""
	I1213 12:04:42.424299  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.424324  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:42.424342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:42.424367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:42.453160  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:42.453198  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:42.486810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:42.486840  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:42.567003  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:42.567043  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:42.606556  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:42.606591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:42.678272  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:45.178582  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:45.193685  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:45.193792  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:45.236374  620795 cri.go:89] found id: ""
	I1213 12:04:45.236402  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.236411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:45.236419  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:45.236487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:45.279160  620795 cri.go:89] found id: ""
	I1213 12:04:45.279193  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.279203  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:45.279210  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:45.279281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:45.308966  620795 cri.go:89] found id: ""
	I1213 12:04:45.308991  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.309000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:45.309006  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:45.309065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:45.337083  620795 cri.go:89] found id: ""
	I1213 12:04:45.337110  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.337119  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:45.337126  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:45.337212  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:45.366596  620795 cri.go:89] found id: ""
	I1213 12:04:45.366619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.366628  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:45.366635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:45.366694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:45.391548  620795 cri.go:89] found id: ""
	I1213 12:04:45.391572  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.391581  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:45.391588  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:45.391649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:45.418598  620795 cri.go:89] found id: ""
	I1213 12:04:45.418619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.418628  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:45.418635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:45.418700  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:45.448270  620795 cri.go:89] found id: ""
	I1213 12:04:45.448292  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.448301  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:45.448310  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:45.448321  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:45.478882  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:45.478907  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:45.548829  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:45.548916  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:45.567213  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:45.567382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:45.681775  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:45.681800  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:45.681816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.211634  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:48.222293  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:48.222364  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:48.249683  620795 cri.go:89] found id: ""
	I1213 12:04:48.249707  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.249715  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:48.249722  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:48.249785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:48.277977  620795 cri.go:89] found id: ""
	I1213 12:04:48.277999  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.278009  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:48.278015  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:48.278072  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:48.304052  620795 cri.go:89] found id: ""
	I1213 12:04:48.304080  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.304089  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:48.304096  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:48.304153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:48.334039  620795 cri.go:89] found id: ""
	I1213 12:04:48.334066  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.334075  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:48.334087  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:48.334151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:48.364623  620795 cri.go:89] found id: ""
	I1213 12:04:48.364646  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.364654  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:48.364661  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:48.364723  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:48.389613  620795 cri.go:89] found id: ""
	I1213 12:04:48.389684  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.389707  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:48.389718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:48.389797  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:48.418439  620795 cri.go:89] found id: ""
	I1213 12:04:48.418467  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.418477  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:48.418485  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:48.418544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:48.446312  620795 cri.go:89] found id: ""
	I1213 12:04:48.446341  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.446350  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:48.446360  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:48.446372  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:48.463031  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:48.463116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:48.558736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:48.558767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:48.558782  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.606808  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:48.606885  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:48.638169  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:48.638199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:49.729332  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:49.791669  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:49.791778  620795 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:49.794717  620795 out.go:179] * Enabled addons: 
	I1213 12:04:49.797659  620795 addons.go:530] duration metric: took 1m53.008142261s for enable addons: enabled=[]
	I1213 12:04:51.210580  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:51.221809  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:51.221877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:51.247182  620795 cri.go:89] found id: ""
	I1213 12:04:51.247259  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.247282  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:51.247301  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:51.247396  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:51.275541  620795 cri.go:89] found id: ""
	I1213 12:04:51.275608  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.275623  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:51.275631  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:51.275695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:51.300774  620795 cri.go:89] found id: ""
	I1213 12:04:51.300866  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.300889  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:51.300902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:51.300973  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:51.330039  620795 cri.go:89] found id: ""
	I1213 12:04:51.330064  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.330074  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:51.330080  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:51.330152  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:51.358455  620795 cri.go:89] found id: ""
	I1213 12:04:51.358482  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.358491  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:51.358497  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:51.358556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:51.387907  620795 cri.go:89] found id: ""
	I1213 12:04:51.387933  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.387942  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:51.387948  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:51.388011  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:51.414050  620795 cri.go:89] found id: ""
	I1213 12:04:51.414075  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.414084  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:51.414091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:51.414148  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:51.440682  620795 cri.go:89] found id: ""
	I1213 12:04:51.440715  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.440729  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:51.440739  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:51.440752  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:51.502275  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:51.502296  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:51.502308  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:51.533683  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:51.533722  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:51.590439  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:51.590468  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:51.668678  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:51.668719  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.186166  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:54.196649  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:54.196718  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:54.221630  620795 cri.go:89] found id: ""
	I1213 12:04:54.221656  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.221665  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:54.221672  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:54.221729  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:54.246332  620795 cri.go:89] found id: ""
	I1213 12:04:54.246354  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.246362  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:54.246368  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:54.246425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:54.274363  620795 cri.go:89] found id: ""
	I1213 12:04:54.274385  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.274396  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:54.274405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:54.274465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:54.299013  620795 cri.go:89] found id: ""
	I1213 12:04:54.299036  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.299045  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:54.299051  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:54.299115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:54.325098  620795 cri.go:89] found id: ""
	I1213 12:04:54.325123  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.325133  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:54.325140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:54.325200  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:54.350290  620795 cri.go:89] found id: ""
	I1213 12:04:54.350318  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.350327  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:54.350334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:54.350394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:54.377186  620795 cri.go:89] found id: ""
	I1213 12:04:54.377209  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.377218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:54.377224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:54.377283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:54.409137  620795 cri.go:89] found id: ""
	I1213 12:04:54.409164  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.409174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:54.409184  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:54.409196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.426177  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:54.426207  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:54.491873  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:54.491896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:54.491909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:54.521061  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:54.521153  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:54.580593  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:54.580623  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.166168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:57.177178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:57.177255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:57.209135  620795 cri.go:89] found id: ""
	I1213 12:04:57.209170  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.209179  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:57.209186  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:57.209254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:57.236323  620795 cri.go:89] found id: ""
	I1213 12:04:57.236359  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.236368  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:57.236375  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:57.236433  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:57.261970  620795 cri.go:89] found id: ""
	I1213 12:04:57.261992  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.262001  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:57.262007  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:57.262064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:57.287149  620795 cri.go:89] found id: ""
	I1213 12:04:57.287171  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.287179  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:57.287186  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:57.287242  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:57.312282  620795 cri.go:89] found id: ""
	I1213 12:04:57.312307  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.312316  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:57.312322  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:57.312380  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:57.341454  620795 cri.go:89] found id: ""
	I1213 12:04:57.341480  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.341489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:57.341496  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:57.341559  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:57.366694  620795 cri.go:89] found id: ""
	I1213 12:04:57.366718  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.366729  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:57.366736  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:57.366795  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:57.392434  620795 cri.go:89] found id: ""
	I1213 12:04:57.392459  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.392468  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:57.392478  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:57.392490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:57.426595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:57.426622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.490950  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:57.490984  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:57.508294  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:57.508326  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:57.637638  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:57.637717  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:57.637746  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:00.166037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:00.211490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:00.212114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:00.294178  620795 cri.go:89] found id: ""
	I1213 12:05:00.294201  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.294210  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:00.294217  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:00.294285  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:00.376480  620795 cri.go:89] found id: ""
	I1213 12:05:00.376506  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.376516  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:00.376523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:00.376593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:00.416213  620795 cri.go:89] found id: ""
	I1213 12:05:00.416240  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.416250  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:00.416261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:00.416329  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:00.449590  620795 cri.go:89] found id: ""
	I1213 12:05:00.449620  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.449629  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:00.449637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:00.449722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:00.479461  620795 cri.go:89] found id: ""
	I1213 12:05:00.479486  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.479495  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:00.479502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:00.479589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:00.509094  620795 cri.go:89] found id: ""
	I1213 12:05:00.509123  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.509132  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:00.509138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:00.509204  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:00.583923  620795 cri.go:89] found id: ""
	I1213 12:05:00.583952  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.583962  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:00.583969  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:00.584049  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:00.624268  620795 cri.go:89] found id: ""
	I1213 12:05:00.624299  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.624309  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:00.624322  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:00.624334  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:00.701394  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:00.701419  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:00.701432  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:00.730125  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:00.730170  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:00.760465  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:00.760494  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:00.826577  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:00.826619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.345642  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:03.359010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:03.359082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:03.391792  620795 cri.go:89] found id: ""
	I1213 12:05:03.391816  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.391825  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:03.391832  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:03.391889  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:03.418730  620795 cri.go:89] found id: ""
	I1213 12:05:03.418759  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.418768  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:03.418774  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:03.418831  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:03.447034  620795 cri.go:89] found id: ""
	I1213 12:05:03.447062  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.447070  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:03.447077  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:03.447137  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:03.471737  620795 cri.go:89] found id: ""
	I1213 12:05:03.471763  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.471772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:03.471778  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:03.471832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:03.496618  620795 cri.go:89] found id: ""
	I1213 12:05:03.496641  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.496650  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:03.496656  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:03.496721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:03.538834  620795 cri.go:89] found id: ""
	I1213 12:05:03.538855  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.538901  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:03.538915  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:03.539006  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:03.577353  620795 cri.go:89] found id: ""
	I1213 12:05:03.577375  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.577437  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:03.577445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:03.577590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:03.613163  620795 cri.go:89] found id: ""
	I1213 12:05:03.613234  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.613247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:03.613257  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:03.613296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:03.652148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:03.652174  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:03.718838  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:03.718879  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.736159  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:03.736189  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:03.801478  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:03.801504  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:03.801519  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:06.330711  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:06.341136  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:06.341246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:06.366066  620795 cri.go:89] found id: ""
	I1213 12:05:06.366099  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.366108  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:06.366114  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:06.366178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:06.394525  620795 cri.go:89] found id: ""
	I1213 12:05:06.394563  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.394573  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:06.394580  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:06.394649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:06.424244  620795 cri.go:89] found id: ""
	I1213 12:05:06.424312  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.424336  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:06.424357  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:06.424449  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:06.450497  620795 cri.go:89] found id: ""
	I1213 12:05:06.450529  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.450538  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:06.450545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:06.450614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:06.475735  620795 cri.go:89] found id: ""
	I1213 12:05:06.475759  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.475768  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:06.475774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:06.475835  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:06.501224  620795 cri.go:89] found id: ""
	I1213 12:05:06.501248  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.501257  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:06.501263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:06.501322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:06.548385  620795 cri.go:89] found id: ""
	I1213 12:05:06.548410  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.548419  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:06.548425  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:06.548498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:06.613365  620795 cri.go:89] found id: ""
	I1213 12:05:06.613444  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.613469  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:06.613490  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:06.613525  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:06.642036  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:06.642067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:06.675194  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:06.675218  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:06.743889  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:06.743933  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:06.760968  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:06.761004  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:06.828998  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:09.329981  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:09.340577  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:09.340644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:09.368902  620795 cri.go:89] found id: ""
	I1213 12:05:09.368926  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.368935  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:09.368941  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:09.369004  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:09.397232  620795 cri.go:89] found id: ""
	I1213 12:05:09.397263  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.397273  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:09.397280  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:09.397353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:09.424425  620795 cri.go:89] found id: ""
	I1213 12:05:09.424455  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.424465  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:09.424471  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:09.424529  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:09.449435  620795 cri.go:89] found id: ""
	I1213 12:05:09.449457  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.449466  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:09.449472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:09.449534  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:09.473489  620795 cri.go:89] found id: ""
	I1213 12:05:09.473512  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.473521  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:09.473527  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:09.473584  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:09.503533  620795 cri.go:89] found id: ""
	I1213 12:05:09.503560  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.503569  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:09.503576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:09.503632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:09.569217  620795 cri.go:89] found id: ""
	I1213 12:05:09.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.569312  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:09.569331  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:09.569431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:09.616563  620795 cri.go:89] found id: ""
	I1213 12:05:09.616632  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.616663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:09.616686  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:09.616726  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:09.645190  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:09.645217  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:09.710725  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:09.710760  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:09.727200  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:09.727231  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:09.793579  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:09.793611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:09.793625  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.321617  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:12.332442  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:12.332517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:12.357812  620795 cri.go:89] found id: ""
	I1213 12:05:12.357835  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.357844  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:12.357851  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:12.357912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:12.383803  620795 cri.go:89] found id: ""
	I1213 12:05:12.383827  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.383836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:12.383842  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:12.383902  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:12.408966  620795 cri.go:89] found id: ""
	I1213 12:05:12.409044  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.409061  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:12.409069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:12.409183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:12.438466  620795 cri.go:89] found id: ""
	I1213 12:05:12.438491  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.438499  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:12.438506  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:12.438562  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:12.468347  620795 cri.go:89] found id: ""
	I1213 12:05:12.468375  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.468385  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:12.468391  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:12.468455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:12.493833  620795 cri.go:89] found id: ""
	I1213 12:05:12.493860  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.493869  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:12.493876  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:12.493936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:12.540091  620795 cri.go:89] found id: ""
	I1213 12:05:12.540120  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.540130  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:12.540137  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:12.540202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:12.593138  620795 cri.go:89] found id: ""
	I1213 12:05:12.593165  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.593174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:12.593184  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:12.593195  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:12.670751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:12.670790  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:12.688162  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:12.688196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:12.753953  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:12.753978  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:12.753990  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.782410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:12.782447  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:15.314766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:15.325177  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:15.325244  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:15.350233  620795 cri.go:89] found id: ""
	I1213 12:05:15.350259  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.350269  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:15.350276  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:15.350332  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:15.375095  620795 cri.go:89] found id: ""
	I1213 12:05:15.375121  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.375131  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:15.375138  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:15.375198  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:15.400509  620795 cri.go:89] found id: ""
	I1213 12:05:15.400531  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.400539  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:15.400545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:15.400604  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:15.429727  620795 cri.go:89] found id: ""
	I1213 12:05:15.429749  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.429758  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:15.429765  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:15.429818  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:15.455300  620795 cri.go:89] found id: ""
	I1213 12:05:15.455321  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.455330  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:15.455336  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:15.455393  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:15.480516  620795 cri.go:89] found id: ""
	I1213 12:05:15.480540  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.480549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:15.480556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:15.480617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:15.508281  620795 cri.go:89] found id: ""
	I1213 12:05:15.508358  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.508375  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:15.508382  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:15.508453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:15.569260  620795 cri.go:89] found id: ""
	I1213 12:05:15.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.569295  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:15.569304  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:15.569317  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:15.653590  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:15.653630  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:15.670770  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:15.670805  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:15.734152  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:15.734221  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:15.734248  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:15.762906  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:15.762941  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.292789  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:18.303334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:18.303410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:18.329348  620795 cri.go:89] found id: ""
	I1213 12:05:18.329372  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.329382  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:18.329389  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:18.329455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:18.358617  620795 cri.go:89] found id: ""
	I1213 12:05:18.358638  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.358647  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:18.358653  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:18.358710  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:18.383565  620795 cri.go:89] found id: ""
	I1213 12:05:18.383589  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.383597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:18.383603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:18.383666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:18.409351  620795 cri.go:89] found id: ""
	I1213 12:05:18.409378  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.409387  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:18.409394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:18.409456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:18.435771  620795 cri.go:89] found id: ""
	I1213 12:05:18.435797  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.435806  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:18.435813  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:18.435875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:18.464513  620795 cri.go:89] found id: ""
	I1213 12:05:18.464539  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.464549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:18.464556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:18.464659  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:18.490219  620795 cri.go:89] found id: ""
	I1213 12:05:18.490244  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.490252  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:18.490260  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:18.490317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:18.532969  620795 cri.go:89] found id: ""
	I1213 12:05:18.532995  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.533004  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:18.533013  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:18.533027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.595123  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:18.595154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:18.672161  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:18.672201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:18.689194  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:18.689222  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:18.754503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:18.754526  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:18.754539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:21.283365  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:21.294092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:21.294183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:21.321526  620795 cri.go:89] found id: ""
	I1213 12:05:21.321549  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.321559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:21.321565  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:21.321622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:21.349919  620795 cri.go:89] found id: ""
	I1213 12:05:21.349943  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.349952  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:21.349958  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:21.350021  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:21.379881  620795 cri.go:89] found id: ""
	I1213 12:05:21.379906  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.379915  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:21.379922  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:21.379982  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:21.405656  620795 cri.go:89] found id: ""
	I1213 12:05:21.405679  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.405687  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:21.405694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:21.405754  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:21.435716  620795 cri.go:89] found id: ""
	I1213 12:05:21.435752  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.435762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:21.435769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:21.435839  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:21.461176  620795 cri.go:89] found id: ""
	I1213 12:05:21.461199  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.461207  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:21.461214  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:21.461271  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:21.487321  620795 cri.go:89] found id: ""
	I1213 12:05:21.487357  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.487366  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:21.487372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:21.487438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:21.513663  620795 cri.go:89] found id: ""
	I1213 12:05:21.513687  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.513696  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:21.513706  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:21.513740  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:21.547538  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:21.547713  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:21.648986  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:21.649007  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:21.649020  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:21.676895  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:21.676929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:21.706237  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:21.706268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:24.271406  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:24.281916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:24.281984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:24.306547  620795 cri.go:89] found id: ""
	I1213 12:05:24.306570  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.306579  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:24.306586  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:24.306645  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:24.334194  620795 cri.go:89] found id: ""
	I1213 12:05:24.334218  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.334227  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:24.334234  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:24.334291  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:24.360113  620795 cri.go:89] found id: ""
	I1213 12:05:24.360139  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.360148  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:24.360154  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:24.360219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:24.385854  620795 cri.go:89] found id: ""
	I1213 12:05:24.385879  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.385889  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:24.385896  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:24.385960  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:24.411999  620795 cri.go:89] found id: ""
	I1213 12:05:24.412025  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.412034  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:24.412042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:24.412102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:24.438300  620795 cri.go:89] found id: ""
	I1213 12:05:24.438325  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.438335  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:24.438347  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:24.438405  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:24.464325  620795 cri.go:89] found id: ""
	I1213 12:05:24.464351  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.464361  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:24.464369  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:24.464430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:24.491896  620795 cri.go:89] found id: ""
	I1213 12:05:24.491920  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.491930  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:24.491939  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:24.491971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:24.519363  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:24.519445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:24.616473  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:24.616502  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:24.692608  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:24.692645  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:24.711650  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:24.711689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:24.775602  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.275849  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:27.286597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:27.286680  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:27.311787  620795 cri.go:89] found id: ""
	I1213 12:05:27.311813  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.311822  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:27.311829  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:27.311893  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:27.341056  620795 cri.go:89] found id: ""
	I1213 12:05:27.341123  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.341146  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:27.341160  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:27.341233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:27.365944  620795 cri.go:89] found id: ""
	I1213 12:05:27.365978  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.365986  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:27.365993  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:27.366057  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:27.390576  620795 cri.go:89] found id: ""
	I1213 12:05:27.390611  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.390626  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:27.390633  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:27.390702  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:27.420415  620795 cri.go:89] found id: ""
	I1213 12:05:27.420439  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.420448  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:27.420454  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:27.420516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:27.445745  620795 cri.go:89] found id: ""
	I1213 12:05:27.445812  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.445835  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:27.445853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:27.445936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:27.475470  620795 cri.go:89] found id: ""
	I1213 12:05:27.475508  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.475538  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:27.475547  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:27.475615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:27.502195  620795 cri.go:89] found id: ""
	I1213 12:05:27.502222  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.502231  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:27.502240  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:27.502252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:27.597636  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:27.597744  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:27.629736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:27.629763  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:27.694305  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.694327  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:27.694339  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:27.723090  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:27.723129  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:30.253217  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:30.264373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:30.264446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:30.290413  620795 cri.go:89] found id: ""
	I1213 12:05:30.290440  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.290450  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:30.290457  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:30.290517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:30.318052  620795 cri.go:89] found id: ""
	I1213 12:05:30.318079  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.318096  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:30.318104  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:30.318172  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:30.343233  620795 cri.go:89] found id: ""
	I1213 12:05:30.343267  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.343277  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:30.343283  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:30.343349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:30.373053  620795 cri.go:89] found id: ""
	I1213 12:05:30.373077  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.373086  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:30.373092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:30.373149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:30.401783  620795 cri.go:89] found id: ""
	I1213 12:05:30.401862  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.401879  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:30.401886  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:30.401955  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:30.427557  620795 cri.go:89] found id: ""
	I1213 12:05:30.427580  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.427589  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:30.427595  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:30.427652  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:30.452324  620795 cri.go:89] found id: ""
	I1213 12:05:30.452404  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.452426  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:30.452445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:30.452538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:30.485213  620795 cri.go:89] found id: ""
	I1213 12:05:30.485283  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.485307  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:30.485325  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:30.485337  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:30.567099  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:30.571250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:30.599905  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:30.599987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:30.671402  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:30.671475  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:30.671544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:30.700275  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:30.700310  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:33.229307  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:33.240030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:33.240101  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:33.264516  620795 cri.go:89] found id: ""
	I1213 12:05:33.264540  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.264550  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:33.264557  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:33.264622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:33.288665  620795 cri.go:89] found id: ""
	I1213 12:05:33.288694  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.288704  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:33.288711  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:33.288772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:33.318238  620795 cri.go:89] found id: ""
	I1213 12:05:33.318314  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.318338  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:33.318356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:33.318437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:33.342548  620795 cri.go:89] found id: ""
	I1213 12:05:33.342582  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.342592  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:33.342598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:33.342667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:33.368791  620795 cri.go:89] found id: ""
	I1213 12:05:33.368814  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.368823  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:33.368829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:33.368887  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:33.395218  620795 cri.go:89] found id: ""
	I1213 12:05:33.395254  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.395263  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:33.395270  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:33.395342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:33.422228  620795 cri.go:89] found id: ""
	I1213 12:05:33.422263  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.422272  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:33.422279  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:33.422345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:33.448101  620795 cri.go:89] found id: ""
	I1213 12:05:33.448126  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.448136  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:33.448146  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:33.448164  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:33.513958  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:33.513995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:33.536519  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:33.536547  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:33.642718  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:33.642742  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:33.642757  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:33.671233  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:33.671268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:36.205718  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:36.216490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:36.216599  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:36.242239  620795 cri.go:89] found id: ""
	I1213 12:05:36.242267  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.242277  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:36.242284  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:36.242345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:36.267114  620795 cri.go:89] found id: ""
	I1213 12:05:36.267140  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.267149  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:36.267155  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:36.267221  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:36.292484  620795 cri.go:89] found id: ""
	I1213 12:05:36.292510  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.292519  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:36.292525  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:36.292586  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:36.317342  620795 cri.go:89] found id: ""
	I1213 12:05:36.317365  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.317374  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:36.317380  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:36.317442  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:36.346675  620795 cri.go:89] found id: ""
	I1213 12:05:36.346746  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.346770  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:36.346788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:36.346878  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:36.374350  620795 cri.go:89] found id: ""
	I1213 12:05:36.374416  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.374440  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:36.374459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:36.374550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:36.401836  620795 cri.go:89] found id: ""
	I1213 12:05:36.401904  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.401927  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:36.401947  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:36.402023  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:36.436530  620795 cri.go:89] found id: ""
	I1213 12:05:36.436612  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.436635  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:36.436653  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:36.436680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:36.464595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:36.464663  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:36.550070  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:36.550121  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:36.581383  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:36.581414  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:36.674763  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:36.674830  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:36.674854  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:39.203663  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:39.214134  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:39.214211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:39.240674  620795 cri.go:89] found id: ""
	I1213 12:05:39.240705  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.240714  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:39.240721  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:39.240786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:39.265873  620795 cri.go:89] found id: ""
	I1213 12:05:39.265895  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.265903  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:39.265909  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:39.265966  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:39.291928  620795 cri.go:89] found id: ""
	I1213 12:05:39.291952  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.291960  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:39.291978  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:39.292037  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:39.317111  620795 cri.go:89] found id: ""
	I1213 12:05:39.317144  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.317153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:39.317160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:39.317219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:39.341971  620795 cri.go:89] found id: ""
	I1213 12:05:39.341993  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.342002  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:39.342009  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:39.342065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:39.370095  620795 cri.go:89] found id: ""
	I1213 12:05:39.370166  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.370192  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:39.370212  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:39.370297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:39.396661  620795 cri.go:89] found id: ""
	I1213 12:05:39.396740  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.396765  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:39.396777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:39.396855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:39.426139  620795 cri.go:89] found id: ""
	I1213 12:05:39.426167  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.426177  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:39.426188  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:39.426199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:39.458970  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:39.459002  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:39.525484  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:39.525523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:39.554066  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:39.554149  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:39.647487  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:39.647508  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:39.647543  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.175675  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:42.189064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:42.189149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:42.220105  620795 cri.go:89] found id: ""
	I1213 12:05:42.220135  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.220156  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:42.220164  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:42.220229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:42.250459  620795 cri.go:89] found id: ""
	I1213 12:05:42.250486  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.250495  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:42.250502  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:42.250570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:42.278746  620795 cri.go:89] found id: ""
	I1213 12:05:42.278773  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.278785  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:42.278793  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:42.278855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:42.307046  620795 cri.go:89] found id: ""
	I1213 12:05:42.307073  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.307083  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:42.307092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:42.307153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:42.335010  620795 cri.go:89] found id: ""
	I1213 12:05:42.335035  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.335046  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:42.335052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:42.335114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:42.362128  620795 cri.go:89] found id: ""
	I1213 12:05:42.362154  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.362163  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:42.362170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:42.362231  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:42.396146  620795 cri.go:89] found id: ""
	I1213 12:05:42.396175  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.396186  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:42.396193  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:42.396254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:42.423111  620795 cri.go:89] found id: ""
	I1213 12:05:42.423137  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.423146  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:42.423155  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:42.423167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:42.440295  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:42.440325  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:42.504038  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:42.504059  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:42.504071  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.550928  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:42.550966  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:42.608904  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:42.608935  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
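	The block above is one iteration of minikube's diagnostic loop: it probes for a kube-apiserver process, then lists CRI containers for each expected control-plane component and finds none. A minimal bash sketch of the same per-component check, run manually inside the node (for example via `minikube ssh -p <profile>`; the profile name is not shown in this log) — the loop structure is added here for illustration, while the crictl invocation is copied from the log:
	
	    # Sketch: re-run the per-component container check seen in the log (not minikube's own code).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="${name}")
	      if [ -z "${ids}" ]; then
	        echo "no container found matching ${name}"
	      else
	        echo "${name}: ${ids}"
	      fi
	    done
	
	If every component prints "no container found", as in the log above, no control-plane pods have been created, which is consistent with the connection-refused errors on localhost:8443.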
	I1213 12:05:45.181124  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:45.197731  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:45.197873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:45.246027  620795 cri.go:89] found id: ""
	I1213 12:05:45.246070  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.246081  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:45.246106  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:45.246220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:45.279332  620795 cri.go:89] found id: ""
	I1213 12:05:45.279388  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.279398  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:45.279404  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:45.279509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:45.314910  620795 cri.go:89] found id: ""
	I1213 12:05:45.314988  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.315000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:45.315010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:45.315114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:45.343055  620795 cri.go:89] found id: ""
	I1213 12:05:45.343130  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.343153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:45.343175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:45.343282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:45.370166  620795 cri.go:89] found id: ""
	I1213 12:05:45.370240  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.370275  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:45.370299  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:45.370391  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:45.396456  620795 cri.go:89] found id: ""
	I1213 12:05:45.396480  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.396489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:45.396495  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:45.396550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:45.421687  620795 cri.go:89] found id: ""
	I1213 12:05:45.421711  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.421720  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:45.421726  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:45.421781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:45.446648  620795 cri.go:89] found id: ""
	I1213 12:05:45.446672  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.446681  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:45.446691  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:45.446702  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:45.512020  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:45.512055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:45.543051  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:45.543084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:45.640767  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:45.640789  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:45.640802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:45.670787  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:45.670822  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:48.201632  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:48.211975  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:48.212046  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:48.241331  620795 cri.go:89] found id: ""
	I1213 12:05:48.241355  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.241364  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:48.241371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:48.241430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:48.266481  620795 cri.go:89] found id: ""
	I1213 12:05:48.266506  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.266515  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:48.266523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:48.266581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:48.292562  620795 cri.go:89] found id: ""
	I1213 12:05:48.292587  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.292597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:48.292604  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:48.292666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:48.316829  620795 cri.go:89] found id: ""
	I1213 12:05:48.316853  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.316862  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:48.316869  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:48.316928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:48.341279  620795 cri.go:89] found id: ""
	I1213 12:05:48.341304  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.341313  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:48.341320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:48.341395  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:48.370602  620795 cri.go:89] found id: ""
	I1213 12:05:48.370668  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.370684  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:48.370692  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:48.370757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:48.395975  620795 cri.go:89] found id: ""
	I1213 12:05:48.396001  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.396011  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:48.396017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:48.396076  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:48.422104  620795 cri.go:89] found id: ""
	I1213 12:05:48.422129  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.422139  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:48.422150  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:48.422163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:48.487414  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:48.487451  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:48.504893  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:48.504924  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:48.613440  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:48.613472  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:48.613485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:48.643454  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:48.643496  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:51.173081  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:51.184091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:51.184220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:51.209714  620795 cri.go:89] found id: ""
	I1213 12:05:51.209741  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.209751  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:51.209757  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:51.209815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:51.236381  620795 cri.go:89] found id: ""
	I1213 12:05:51.236414  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.236423  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:51.236429  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:51.236495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:51.266394  620795 cri.go:89] found id: ""
	I1213 12:05:51.266428  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.266437  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:51.266443  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:51.266509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:51.293949  620795 cri.go:89] found id: ""
	I1213 12:05:51.293981  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.293991  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:51.293998  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:51.294062  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:51.324019  620795 cri.go:89] found id: ""
	I1213 12:05:51.324042  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.324056  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:51.324062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:51.324145  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:51.352992  620795 cri.go:89] found id: ""
	I1213 12:05:51.353023  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.353032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:51.353039  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:51.353098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:51.378872  620795 cri.go:89] found id: ""
	I1213 12:05:51.378898  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.378907  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:51.378914  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:51.378976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:51.406670  620795 cri.go:89] found id: ""
	I1213 12:05:51.406695  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.406703  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:51.406713  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:51.406728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:51.469269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:51.469290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:51.469304  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:51.497318  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:51.497352  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:51.534646  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:51.534680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:51.618348  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:51.618388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
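	Each iteration also gathers the same four sources: the kubelet and CRI-O journals, a filtered dmesg, and a container-status listing. The commands below are copied verbatim from the "Gathering logs for ..." steps above and can be run manually on the node to inspect the same output:
	
	    # Same log-gathering commands minikube runs in this loop (copied from the log).
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a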
	I1213 12:05:54.137197  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:54.147708  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:54.147778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:54.173064  620795 cri.go:89] found id: ""
	I1213 12:05:54.173089  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.173098  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:54.173105  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:54.173164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:54.198688  620795 cri.go:89] found id: ""
	I1213 12:05:54.198713  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.198723  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:54.198733  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:54.198789  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:54.224472  620795 cri.go:89] found id: ""
	I1213 12:05:54.224497  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.224506  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:54.224512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:54.224571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:54.254875  620795 cri.go:89] found id: ""
	I1213 12:05:54.254900  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.254909  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:54.254916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:54.254985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:54.286287  620795 cri.go:89] found id: ""
	I1213 12:05:54.286314  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.286322  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:54.286329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:54.286384  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:54.312009  620795 cri.go:89] found id: ""
	I1213 12:05:54.312034  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.312043  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:54.312050  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:54.312109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:54.338472  620795 cri.go:89] found id: ""
	I1213 12:05:54.338506  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.338516  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:54.338522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:54.338590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:54.363767  620795 cri.go:89] found id: ""
	I1213 12:05:54.363791  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.363799  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:54.363810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:54.363827  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:54.429426  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:54.429462  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.446820  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:54.446859  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:54.514113  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:54.514137  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:54.514150  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:54.547597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:54.547688  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.126156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:57.136777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:57.136854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:57.166084  620795 cri.go:89] found id: ""
	I1213 12:05:57.166107  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.166116  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:57.166122  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:57.166180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:57.194344  620795 cri.go:89] found id: ""
	I1213 12:05:57.194368  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.194377  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:57.194384  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:57.194445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:57.220264  620795 cri.go:89] found id: ""
	I1213 12:05:57.220289  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.220298  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:57.220305  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:57.220362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:57.245200  620795 cri.go:89] found id: ""
	I1213 12:05:57.245222  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.245230  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:57.245236  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:57.245292  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:57.272963  620795 cri.go:89] found id: ""
	I1213 12:05:57.272987  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.272996  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:57.273003  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:57.273061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:57.297916  620795 cri.go:89] found id: ""
	I1213 12:05:57.297940  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.297947  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:57.297954  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:57.298016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:57.323201  620795 cri.go:89] found id: ""
	I1213 12:05:57.323226  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.323235  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:57.323241  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:57.323301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:57.348727  620795 cri.go:89] found id: ""
	I1213 12:05:57.348759  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.348769  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:57.348779  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:57.348794  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:57.424991  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:57.425015  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:57.425027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:57.454618  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:57.454652  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.482599  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:57.482627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:57.556901  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:57.556982  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
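	Every "describe nodes" attempt in this loop fails the same way: the bundled kubectl cannot reach the apiserver on localhost:8443 because nothing is listening there. A hedged sketch for confirming that state directly on the node follows; the pgrep, kubectl, and kubeconfig invocations are taken from the log, while the curl probe of /healthz is an added assumption, not part of the log:
	
	    # Sketch: is an apiserver process running, and is anything answering on 8443?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    curl -ks https://localhost:8443/healthz || echo "nothing listening on localhost:8443"
	    # Retry the exact describe-nodes command minikube runs:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig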
	I1213 12:06:00.078226  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:00.114729  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:00.114815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:00.214510  620795 cri.go:89] found id: ""
	I1213 12:06:00.214537  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.214547  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:00.214560  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:00.214644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:00.283401  620795 cri.go:89] found id: ""
	I1213 12:06:00.283433  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.283443  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:00.283450  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:00.283564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:00.333853  620795 cri.go:89] found id: ""
	I1213 12:06:00.333946  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.333974  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:00.333999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:00.334124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:00.370564  620795 cri.go:89] found id: ""
	I1213 12:06:00.370647  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.370670  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:00.370693  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:00.370796  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:00.400318  620795 cri.go:89] found id: ""
	I1213 12:06:00.400355  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.400365  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:00.400373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:00.400451  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:00.429349  620795 cri.go:89] found id: ""
	I1213 12:06:00.429376  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.429387  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:00.429394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:00.429480  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:00.457513  620795 cri.go:89] found id: ""
	I1213 12:06:00.457540  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.457549  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:00.457555  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:00.457617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:00.484050  620795 cri.go:89] found id: ""
	I1213 12:06:00.484077  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.484086  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:00.484096  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:00.484110  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:00.564314  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:00.564357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:00.586853  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:00.586884  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:00.678609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:00.678679  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:00.678699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:00.708726  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:00.708764  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:03.239868  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:03.250271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:03.250342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:03.278221  620795 cri.go:89] found id: ""
	I1213 12:06:03.278246  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.278254  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:03.278261  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:03.278323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:03.307255  620795 cri.go:89] found id: ""
	I1213 12:06:03.307280  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.307288  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:03.307295  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:03.307358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:03.334371  620795 cri.go:89] found id: ""
	I1213 12:06:03.334394  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.334402  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:03.334408  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:03.334465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:03.359920  620795 cri.go:89] found id: ""
	I1213 12:06:03.359947  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.359959  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:03.359966  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:03.360026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:03.388349  620795 cri.go:89] found id: ""
	I1213 12:06:03.388373  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.388382  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:03.388389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:03.388446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:03.413684  620795 cri.go:89] found id: ""
	I1213 12:06:03.413712  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.413721  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:03.413727  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:03.413786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:03.438590  620795 cri.go:89] found id: ""
	I1213 12:06:03.438613  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.438622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:03.438629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:03.438686  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:03.466031  620795 cri.go:89] found id: ""
	I1213 12:06:03.466065  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.466074  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:03.466084  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:03.466095  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:03.540002  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:03.540037  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:03.581254  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:03.581285  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:03.657609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:03.657641  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:03.657654  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:03.686248  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:03.686284  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
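	The pattern repeats roughly every three seconds with no change: no control-plane containers appear and the apiserver never comes up on 8443. When triaging this manually, a first check (an added suggestion, not part of the log) is whether the kubelet and CRI-O services are running at all on the node:
	
	    # Added triage sketch: verify the runtime and kubelet services themselves.
	    sudo systemctl is-active crio kubelet
	    sudo systemctl status kubelet --no-pager -l | tail -n 20
	    sudo journalctl -u kubelet -n 400 | grep -iE 'error|fail' | tail -n 40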
	I1213 12:06:06.215254  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:06.226059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:06.226130  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:06.252206  620795 cri.go:89] found id: ""
	I1213 12:06:06.252229  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.252237  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:06.252243  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:06.252306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:06.282327  620795 cri.go:89] found id: ""
	I1213 12:06:06.282349  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.282358  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:06.282364  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:06.282425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:06.312866  620795 cri.go:89] found id: ""
	I1213 12:06:06.312889  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.312898  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:06.312905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:06.312964  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:06.339757  620795 cri.go:89] found id: ""
	I1213 12:06:06.339828  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.339851  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:06.339865  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:06.339937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:06.366465  620795 cri.go:89] found id: ""
	I1213 12:06:06.366491  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.366508  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:06.366515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:06.366589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:06.395704  620795 cri.go:89] found id: ""
	I1213 12:06:06.395727  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.395735  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:06.395742  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:06.395800  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:06.420941  620795 cri.go:89] found id: ""
	I1213 12:06:06.420966  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.420974  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:06.420981  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:06.421040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:06.446747  620795 cri.go:89] found id: ""
	I1213 12:06:06.446771  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.446781  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:06.446790  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:06.446802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:06.515396  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:06.515437  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:06.537368  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:06.537458  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:06.638118  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:06.638202  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:06.638230  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:06.668749  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:06.668789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.204205  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:09.214694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:09.214763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:09.240252  620795 cri.go:89] found id: ""
	I1213 12:06:09.240291  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.240301  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:09.240307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:09.240372  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:09.267161  620795 cri.go:89] found id: ""
	I1213 12:06:09.267188  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.267197  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:09.267203  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:09.267263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:09.292472  620795 cri.go:89] found id: ""
	I1213 12:06:09.292501  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.292510  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:09.292517  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:09.292581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:09.317718  620795 cri.go:89] found id: ""
	I1213 12:06:09.317745  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.317754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:09.317760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:09.317819  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:09.342979  620795 cri.go:89] found id: ""
	I1213 12:06:09.343006  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.343015  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:09.343021  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:09.343080  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:09.370344  620795 cri.go:89] found id: ""
	I1213 12:06:09.370368  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.370377  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:09.370383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:09.370441  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:09.397428  620795 cri.go:89] found id: ""
	I1213 12:06:09.397451  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.397461  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:09.397467  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:09.397527  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:09.422862  620795 cri.go:89] found id: ""
	I1213 12:06:09.422890  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.422900  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:09.422909  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:09.422923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:09.486031  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:09.486057  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:09.486070  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:09.514736  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:09.514772  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.586482  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:09.586558  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:09.660422  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:09.660459  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.179299  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:12.190230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:12.190302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:12.216052  620795 cri.go:89] found id: ""
	I1213 12:06:12.216076  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.216085  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:12.216092  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:12.216150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:12.245417  620795 cri.go:89] found id: ""
	I1213 12:06:12.245443  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.245453  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:12.245460  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:12.245525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:12.272357  620795 cri.go:89] found id: ""
	I1213 12:06:12.272382  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.272391  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:12.272397  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:12.272459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:12.297431  620795 cri.go:89] found id: ""
	I1213 12:06:12.297458  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.297467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:12.297479  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:12.297537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:12.322773  620795 cri.go:89] found id: ""
	I1213 12:06:12.322796  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.322805  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:12.322829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:12.322894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:12.348212  620795 cri.go:89] found id: ""
	I1213 12:06:12.348278  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.348293  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:12.348301  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:12.348360  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:12.378078  620795 cri.go:89] found id: ""
	I1213 12:06:12.378105  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.378115  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:12.378122  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:12.378186  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:12.403938  620795 cri.go:89] found id: ""
	I1213 12:06:12.404005  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.404029  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:12.404044  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:12.404056  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:12.432395  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:12.432433  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:12.465021  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:12.465055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:12.533527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:12.533564  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.557847  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:12.557876  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:12.649280  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:15.150199  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:15.161093  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:15.161164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:15.188375  620795 cri.go:89] found id: ""
	I1213 12:06:15.188402  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.188411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:15.188420  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:15.188494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:15.213569  620795 cri.go:89] found id: ""
	I1213 12:06:15.213592  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.213601  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:15.213607  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:15.213667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:15.244468  620795 cri.go:89] found id: ""
	I1213 12:06:15.244490  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.244499  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:15.244505  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:15.244565  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:15.269446  620795 cri.go:89] found id: ""
	I1213 12:06:15.269469  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.269478  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:15.269484  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:15.269544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:15.297921  620795 cri.go:89] found id: ""
	I1213 12:06:15.297947  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.297957  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:15.297965  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:15.298029  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:15.323225  620795 cri.go:89] found id: ""
	I1213 12:06:15.323248  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.323256  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:15.323263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:15.323322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:15.349965  620795 cri.go:89] found id: ""
	I1213 12:06:15.349988  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.349999  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:15.350005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:15.350067  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:15.378207  620795 cri.go:89] found id: ""
	I1213 12:06:15.378236  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.378247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:15.378258  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:15.378271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:15.443150  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:15.443182  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:15.459353  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:15.459388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:15.546545  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:15.546611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:15.546638  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:15.582173  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:15.582258  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:18.126037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:18.137115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:18.137190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:18.164991  620795 cri.go:89] found id: ""
	I1213 12:06:18.165017  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.165026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:18.165033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:18.165092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:18.191806  620795 cri.go:89] found id: ""
	I1213 12:06:18.191832  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.191841  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:18.191848  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:18.191906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:18.222284  620795 cri.go:89] found id: ""
	I1213 12:06:18.222310  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.222320  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:18.222329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:18.222389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:18.250305  620795 cri.go:89] found id: ""
	I1213 12:06:18.250332  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.250342  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:18.250348  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:18.250406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:18.276798  620795 cri.go:89] found id: ""
	I1213 12:06:18.276823  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.276833  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:18.276841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:18.276901  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:18.301916  620795 cri.go:89] found id: ""
	I1213 12:06:18.301943  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.301952  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:18.301959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:18.302017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:18.327545  620795 cri.go:89] found id: ""
	I1213 12:06:18.327569  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.327577  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:18.327584  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:18.327681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:18.352817  620795 cri.go:89] found id: ""
	I1213 12:06:18.352844  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.352854  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:18.352863  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:18.352902  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:18.418564  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:18.418601  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:18.434897  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:18.434928  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:18.499340  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:18.499366  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:18.499380  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:18.528897  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:18.528980  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:21.104122  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:21.114671  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:21.114786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:21.140990  620795 cri.go:89] found id: ""
	I1213 12:06:21.141014  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.141024  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:21.141030  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:21.141087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:21.168480  620795 cri.go:89] found id: ""
	I1213 12:06:21.168510  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.168519  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:21.168526  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:21.168583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:21.193893  620795 cri.go:89] found id: ""
	I1213 12:06:21.193916  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.193924  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:21.193930  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:21.193985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:21.222789  620795 cri.go:89] found id: ""
	I1213 12:06:21.222811  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.222820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:21.222827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:21.222885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:21.254379  620795 cri.go:89] found id: ""
	I1213 12:06:21.254402  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.254411  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:21.254417  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:21.254476  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:21.280020  620795 cri.go:89] found id: ""
	I1213 12:06:21.280049  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.280058  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:21.280065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:21.280123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:21.305920  620795 cri.go:89] found id: ""
	I1213 12:06:21.305942  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.305952  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:21.305957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:21.306031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:21.334376  620795 cri.go:89] found id: ""
	I1213 12:06:21.334400  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.334409  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:21.334417  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:21.334429  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:21.362868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:21.362906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:21.397678  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:21.397727  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:21.465535  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:21.465574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:21.482417  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:21.482443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:21.566636  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:24.068339  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:24.079607  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:24.079684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:24.105575  620795 cri.go:89] found id: ""
	I1213 12:06:24.105609  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.105619  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:24.105626  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:24.105696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:24.131798  620795 cri.go:89] found id: ""
	I1213 12:06:24.131830  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.131840  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:24.131846  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:24.131905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:24.157068  620795 cri.go:89] found id: ""
	I1213 12:06:24.157096  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.157106  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:24.157113  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:24.157168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:24.186737  620795 cri.go:89] found id: ""
	I1213 12:06:24.186762  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.186772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:24.186779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:24.186843  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:24.214700  620795 cri.go:89] found id: ""
	I1213 12:06:24.214726  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.214745  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:24.214751  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:24.214815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:24.242048  620795 cri.go:89] found id: ""
	I1213 12:06:24.242074  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.242083  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:24.242090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:24.242180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:24.270953  620795 cri.go:89] found id: ""
	I1213 12:06:24.270978  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.270987  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:24.270994  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:24.271074  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:24.296220  620795 cri.go:89] found id: ""
	I1213 12:06:24.296246  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.296256  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:24.296267  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:24.296278  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:24.325330  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:24.325367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:24.355217  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:24.355255  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:24.421526  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:24.421566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:24.438978  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:24.439012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:24.514169  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.015192  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:27.026779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:27.026871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:27.054321  620795 cri.go:89] found id: ""
	I1213 12:06:27.054347  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.054357  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:27.054364  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:27.054423  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:27.084443  620795 cri.go:89] found id: ""
	I1213 12:06:27.084467  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.084476  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:27.084482  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:27.084542  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:27.110224  620795 cri.go:89] found id: ""
	I1213 12:06:27.110251  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.110260  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:27.110267  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:27.110326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:27.141821  620795 cri.go:89] found id: ""
	I1213 12:06:27.141847  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.141857  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:27.141863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:27.141953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:27.168110  620795 cri.go:89] found id: ""
	I1213 12:06:27.168143  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.168153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:27.168160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:27.168228  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:27.193708  620795 cri.go:89] found id: ""
	I1213 12:06:27.193775  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.193791  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:27.193802  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:27.193862  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:27.220542  620795 cri.go:89] found id: ""
	I1213 12:06:27.220569  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.220578  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:27.220585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:27.220673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:27.248536  620795 cri.go:89] found id: ""
	I1213 12:06:27.248614  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.248630  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:27.248641  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:27.248653  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:27.314354  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:27.314389  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:27.331795  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:27.331824  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:27.397269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.397290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:27.397303  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:27.425995  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:27.426034  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:29.964336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:29.975190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:29.975264  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:30.020235  620795 cri.go:89] found id: ""
	I1213 12:06:30.020330  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.020353  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:30.020373  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:30.020492  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:30.064384  620795 cri.go:89] found id: ""
	I1213 12:06:30.064422  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.064431  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:30.064438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:30.064537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:30.093930  620795 cri.go:89] found id: ""
	I1213 12:06:30.093974  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.094003  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:30.094018  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:30.094092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:30.121799  620795 cri.go:89] found id: ""
	I1213 12:06:30.121830  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.121846  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:30.121854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:30.121994  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:30.150127  620795 cri.go:89] found id: ""
	I1213 12:06:30.150153  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.150163  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:30.150170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:30.150232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:30.177848  620795 cri.go:89] found id: ""
	I1213 12:06:30.177873  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.177883  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:30.177889  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:30.177948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:30.204179  620795 cri.go:89] found id: ""
	I1213 12:06:30.204216  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.204225  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:30.204235  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:30.204295  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:30.230625  620795 cri.go:89] found id: ""
	I1213 12:06:30.230653  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.230663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:30.230673  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:30.230685  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:30.297598  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:30.297634  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:30.314962  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:30.314993  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:30.380114  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:30.380136  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:30.380148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:30.408485  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:30.408523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
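[Editor's note] Each "listing CRI containers" line followed by "found id: \"\"" and "0 containers" corresponds to a name-filtered crictl query that returned no output. A small sketch of that query pattern is shown below; the listContainers helper and the whitespace-splitting of IDs are assumptions for illustration, not the cri.go implementation.

// list_containers.go - sketch of the crictl name-filtered listing seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs crictl reports for a given container name,
// mirroring "sudo crictl ps -a --quiet --name=<name>" from the log.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty output means zero containers
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers\n", name, len(ids)) // the log reports 0 for every name here
	}
}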
	I1213 12:06:32.936773  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:32.947334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:32.947408  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:32.974265  620795 cri.go:89] found id: ""
	I1213 12:06:32.974291  620795 logs.go:282] 0 containers: []
	W1213 12:06:32.974300  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:32.974307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:32.974365  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:33.005585  620795 cri.go:89] found id: ""
	I1213 12:06:33.005616  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.005627  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:33.005633  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:33.005704  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:33.036036  620795 cri.go:89] found id: ""
	I1213 12:06:33.036058  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.036072  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:33.036079  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:33.036136  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:33.062415  620795 cri.go:89] found id: ""
	I1213 12:06:33.062439  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.062448  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:33.062455  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:33.062515  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:33.091004  620795 cri.go:89] found id: ""
	I1213 12:06:33.091072  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.091095  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:33.091115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:33.091193  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:33.116964  620795 cri.go:89] found id: ""
	I1213 12:06:33.116989  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.116999  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:33.117005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:33.117084  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:33.143886  620795 cri.go:89] found id: ""
	I1213 12:06:33.143908  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.143918  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:33.143924  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:33.143984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:33.177672  620795 cri.go:89] found id: ""
	I1213 12:06:33.177697  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.177707  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:33.177716  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:33.177728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:33.194235  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:33.194266  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:33.258679  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
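[Editor's note] The repeated "dial tcp [::1]:8443: connect: connection refused" errors mean nothing is accepting connections on localhost:8443, so every kubectl call (including the describe-nodes gather above) fails before it can authenticate. A minimal sketch of that reachability check is below; the address and timeout are taken from the log's error text, and the probe itself is an illustration rather than part of the test harness.

// probe_apiserver.go - sketch of a TCP probe matching the failure mode in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Corresponds to: dial tcp [::1]:8443: connect: connection refused.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}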
	I1213 12:06:33.258703  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:33.258715  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:33.287694  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:33.287731  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:33.319142  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:33.319168  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:35.883653  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:35.894470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:35.894540  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:35.922164  620795 cri.go:89] found id: ""
	I1213 12:06:35.922243  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.922268  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:35.922286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:35.922378  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:35.948794  620795 cri.go:89] found id: ""
	I1213 12:06:35.948824  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.948833  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:35.948840  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:35.948916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:35.976985  620795 cri.go:89] found id: ""
	I1213 12:06:35.977012  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.977023  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:35.977030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:35.977097  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:36.008179  620795 cri.go:89] found id: ""
	I1213 12:06:36.008210  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.008221  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:36.008229  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:36.008306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:36.037414  620795 cri.go:89] found id: ""
	I1213 12:06:36.037434  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.037442  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:36.037448  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:36.037505  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:36.066253  620795 cri.go:89] found id: ""
	I1213 12:06:36.066290  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.066304  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:36.066319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:36.066394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:36.093841  620795 cri.go:89] found id: ""
	I1213 12:06:36.093938  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.093955  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:36.093963  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:36.094042  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:36.119692  620795 cri.go:89] found id: ""
	I1213 12:06:36.119728  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.119737  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:36.119747  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:36.119761  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:36.136247  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:36.136322  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:36.202464  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:36.202486  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:36.202500  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:36.230571  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:36.230606  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:36.257928  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:36.257955  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:38.826068  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:38.841833  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:38.841915  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:38.871763  620795 cri.go:89] found id: ""
	I1213 12:06:38.871788  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.871797  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:38.871803  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:38.871870  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:38.897931  620795 cri.go:89] found id: ""
	I1213 12:06:38.897956  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.897966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:38.897972  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:38.898064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:38.928095  620795 cri.go:89] found id: ""
	I1213 12:06:38.928121  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.928131  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:38.928138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:38.928202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:38.954066  620795 cri.go:89] found id: ""
	I1213 12:06:38.954090  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.954098  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:38.954105  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:38.954168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:38.978723  620795 cri.go:89] found id: ""
	I1213 12:06:38.978752  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.978762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:38.978769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:38.978825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:39.006341  620795 cri.go:89] found id: ""
	I1213 12:06:39.006374  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.006383  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:39.006390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:39.006462  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:39.032585  620795 cri.go:89] found id: ""
	I1213 12:06:39.032612  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.032622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:39.032629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:39.032699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:39.061395  620795 cri.go:89] found id: ""
	I1213 12:06:39.061426  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.061436  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:39.061446  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:39.061457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:39.091343  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:39.091367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:39.160940  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:39.160987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:39.177451  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:39.177490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:39.246489  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:39.246510  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:39.246524  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
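[Editor's note] The "Gathering logs for ..." steps each map one log source to a shell pipeline run on the node (journalctl for kubelet and CRI-O, a filtered dmesg tail, crictl/docker for container status). A compact sketch of that mapping is shown below; the gathers map and the use of CombinedOutput are assumptions for illustration, not the logs.go implementation.

// gather_logs.go - sketch of the per-source log gathering seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	gathers := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range gathers {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s (%d bytes) ==\n", name, len(out))
	}
}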
	I1213 12:06:41.775639  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:41.794476  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:41.794600  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:41.831000  620795 cri.go:89] found id: ""
	I1213 12:06:41.831074  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.831102  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:41.831121  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:41.831203  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:41.872779  620795 cri.go:89] found id: ""
	I1213 12:06:41.872806  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.872816  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:41.872823  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:41.872903  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:41.902394  620795 cri.go:89] found id: ""
	I1213 12:06:41.902420  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.902429  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:41.902435  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:41.902494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:41.929459  620795 cri.go:89] found id: ""
	I1213 12:06:41.929485  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.929494  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:41.929501  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:41.929563  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:41.955676  620795 cri.go:89] found id: ""
	I1213 12:06:41.955700  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.955716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:41.955724  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:41.955783  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:41.981839  620795 cri.go:89] found id: ""
	I1213 12:06:41.981865  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.981875  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:41.981882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:41.981939  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:42.021720  620795 cri.go:89] found id: ""
	I1213 12:06:42.021808  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.021827  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:42.021836  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:42.021908  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:42.052304  620795 cri.go:89] found id: ""
	I1213 12:06:42.052332  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.052341  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:42.052351  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:42.052382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:42.071214  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:42.071250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:42.151103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:42.151127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:42.151146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:42.183473  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:42.183646  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:42.226797  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:42.226834  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:44.796943  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:44.821281  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:44.821413  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:44.863598  620795 cri.go:89] found id: ""
	I1213 12:06:44.863672  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.863697  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:44.863718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:44.863805  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:44.892309  620795 cri.go:89] found id: ""
	I1213 12:06:44.892395  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.892418  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:44.892438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:44.892552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:44.918444  620795 cri.go:89] found id: ""
	I1213 12:06:44.918522  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.918557  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:44.918581  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:44.918673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:44.944223  620795 cri.go:89] found id: ""
	I1213 12:06:44.944249  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.944258  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:44.944265  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:44.944327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:44.970515  620795 cri.go:89] found id: ""
	I1213 12:06:44.970548  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.970559  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:44.970566  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:44.970626  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:44.996938  620795 cri.go:89] found id: ""
	I1213 12:06:44.996966  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.996976  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:44.996983  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:44.997050  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:45.050971  620795 cri.go:89] found id: ""
	I1213 12:06:45.051001  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.051020  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:45.051028  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:45.051107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:45.095037  620795 cri.go:89] found id: ""
	I1213 12:06:45.095076  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.095087  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:45.095098  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:45.095116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:45.209528  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:45.209618  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:45.240275  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:45.240311  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:45.322872  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:45.322895  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:45.322909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:45.353126  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:45.353162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:47.883672  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:47.894317  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:47.894394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:47.920883  620795 cri.go:89] found id: ""
	I1213 12:06:47.920909  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.920919  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:47.920927  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:47.920985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:47.947168  620795 cri.go:89] found id: ""
	I1213 12:06:47.947197  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.947207  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:47.947214  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:47.947279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:47.972678  620795 cri.go:89] found id: ""
	I1213 12:06:47.972701  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.972710  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:47.972717  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:47.972779  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:48.010849  620795 cri.go:89] found id: ""
	I1213 12:06:48.010915  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.010939  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:48.010961  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:48.011038  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:48.040005  620795 cri.go:89] found id: ""
	I1213 12:06:48.040074  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.040098  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:48.040118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:48.040211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:48.067778  620795 cri.go:89] found id: ""
	I1213 12:06:48.067806  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.067815  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:48.067822  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:48.067884  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:48.096165  620795 cri.go:89] found id: ""
	I1213 12:06:48.096207  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.096218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:48.096224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:48.096297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:48.123725  620795 cri.go:89] found id: ""
	I1213 12:06:48.123761  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.123771  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:48.123781  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:48.123793  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:48.153693  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:48.153733  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:48.185148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:48.185227  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:48.251689  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:48.251724  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:48.269048  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:48.269079  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:48.336435  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:50.836744  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:50.848522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:50.848593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:50.874981  620795 cri.go:89] found id: ""
	I1213 12:06:50.875065  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.875088  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:50.875108  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:50.875219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:50.900176  620795 cri.go:89] found id: ""
	I1213 12:06:50.900203  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.900213  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:50.900219  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:50.900277  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:50.929844  620795 cri.go:89] found id: ""
	I1213 12:06:50.929869  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.929878  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:50.929885  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:50.929943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:50.955008  620795 cri.go:89] found id: ""
	I1213 12:06:50.955033  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.955042  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:50.955049  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:50.955104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:50.982109  620795 cri.go:89] found id: ""
	I1213 12:06:50.982134  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.982143  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:50.982149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:50.982211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:51.013066  620795 cri.go:89] found id: ""
	I1213 12:06:51.013144  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.013160  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:51.013168  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:51.013236  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:51.042207  620795 cri.go:89] found id: ""
	I1213 12:06:51.042233  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.042243  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:51.042250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:51.042315  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:51.068089  620795 cri.go:89] found id: ""
	I1213 12:06:51.068116  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.068125  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:51.068135  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:51.068146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:51.136510  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:51.136550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:51.153539  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:51.153567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:51.227168  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:51.227240  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:51.227271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:51.256505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:51.256541  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:53.786599  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:53.808412  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:53.808498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:53.866097  620795 cri.go:89] found id: ""
	I1213 12:06:53.866124  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.866133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:53.866140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:53.866197  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:53.896398  620795 cri.go:89] found id: ""
	I1213 12:06:53.896426  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.896435  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:53.896442  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:53.896499  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:53.922228  620795 cri.go:89] found id: ""
	I1213 12:06:53.922255  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.922265  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:53.922271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:53.922333  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:53.947081  620795 cri.go:89] found id: ""
	I1213 12:06:53.947107  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.947116  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:53.947123  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:53.947177  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:53.972340  620795 cri.go:89] found id: ""
	I1213 12:06:53.972365  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.972374  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:53.972381  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:53.972437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:54.000806  620795 cri.go:89] found id: ""
	I1213 12:06:54.000835  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.000844  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:54.000851  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:54.000925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:54.030584  620795 cri.go:89] found id: ""
	I1213 12:06:54.030617  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.030626  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:54.030648  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:54.030734  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:54.056807  620795 cri.go:89] found id: ""
	I1213 12:06:54.056833  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.056842  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:54.056877  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:54.056897  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:54.122299  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:54.122347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:54.139911  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:54.139944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:54.202433  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:54.202453  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:54.202466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:54.230939  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:54.230977  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:56.761244  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:56.773199  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:56.773280  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:56.833295  620795 cri.go:89] found id: ""
	I1213 12:06:56.833323  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.833338  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:56.833345  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:56.833410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:56.877141  620795 cri.go:89] found id: ""
	I1213 12:06:56.877179  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.877189  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:56.877195  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:56.877255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:56.909304  620795 cri.go:89] found id: ""
	I1213 12:06:56.909329  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.909337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:56.909344  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:56.909402  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:56.937175  620795 cri.go:89] found id: ""
	I1213 12:06:56.937206  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.937215  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:56.937222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:56.937283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:56.962816  620795 cri.go:89] found id: ""
	I1213 12:06:56.962839  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.962848  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:56.962854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:56.962909  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:56.988340  620795 cri.go:89] found id: ""
	I1213 12:06:56.988364  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.988372  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:56.988379  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:56.988438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:57.014873  620795 cri.go:89] found id: ""
	I1213 12:06:57.014956  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.014979  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:57.014997  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:57.015107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:57.042222  620795 cri.go:89] found id: ""
	I1213 12:06:57.042295  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.042331  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:57.042357  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:57.042383  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:57.070110  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:57.070148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:57.097788  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:57.097812  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:57.164029  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:57.164067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:57.182586  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:57.182619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:57.253568  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:59.753877  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:59.764872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:59.764943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:59.794978  620795 cri.go:89] found id: ""
	I1213 12:06:59.795002  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.795016  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:59.795027  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:59.795086  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:59.832235  620795 cri.go:89] found id: ""
	I1213 12:06:59.832264  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.832276  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:59.832283  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:59.832342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:59.879189  620795 cri.go:89] found id: ""
	I1213 12:06:59.879217  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.879227  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:59.879233  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:59.879296  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:59.906738  620795 cri.go:89] found id: ""
	I1213 12:06:59.906766  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.906775  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:59.906782  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:59.906838  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:59.934746  620795 cri.go:89] found id: ""
	I1213 12:06:59.934774  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.934783  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:59.934790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:59.934852  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:59.962016  620795 cri.go:89] found id: ""
	I1213 12:06:59.962049  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.962059  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:59.962066  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:59.962123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:59.988024  620795 cri.go:89] found id: ""
	I1213 12:06:59.988047  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.988056  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:59.988062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:59.988118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:00.062022  620795 cri.go:89] found id: ""
	I1213 12:07:00.062049  620795 logs.go:282] 0 containers: []
	W1213 12:07:00.062059  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:00.062076  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:00.062094  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:00.179599  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:00.181365  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:00.211914  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:00.211958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:00.303311  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:00.303333  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:00.303347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:00.339996  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:00.340039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:02.882696  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:02.898926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:02.899000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:02.928919  620795 cri.go:89] found id: ""
	I1213 12:07:02.928949  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.928959  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:02.928967  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:02.929030  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:02.955168  620795 cri.go:89] found id: ""
	I1213 12:07:02.955194  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.955209  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:02.955215  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:02.955273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:02.984105  620795 cri.go:89] found id: ""
	I1213 12:07:02.984132  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.984141  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:02.984159  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:02.984220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:03.011185  620795 cri.go:89] found id: ""
	I1213 12:07:03.011210  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.011219  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:03.011227  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:03.011289  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:03.038557  620795 cri.go:89] found id: ""
	I1213 12:07:03.038580  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.038588  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:03.038594  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:03.038656  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:03.064610  620795 cri.go:89] found id: ""
	I1213 12:07:03.064650  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.064661  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:03.064667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:03.064725  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:03.090406  620795 cri.go:89] found id: ""
	I1213 12:07:03.090432  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.090441  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:03.090447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:03.090506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:03.117733  620795 cri.go:89] found id: ""
	I1213 12:07:03.117761  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.117770  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:03.117780  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:03.117792  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:03.185975  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:03.185999  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:03.186011  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:03.214353  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:03.214387  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:03.244844  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:03.244873  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:03.310569  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:03.310608  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:05.828010  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:05.840499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:05.840570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:05.867194  620795 cri.go:89] found id: ""
	I1213 12:07:05.867272  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.867295  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:05.867314  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:05.867394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:05.894013  620795 cri.go:89] found id: ""
	I1213 12:07:05.894044  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.894054  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:05.894061  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:05.894126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:05.920207  620795 cri.go:89] found id: ""
	I1213 12:07:05.920234  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.920244  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:05.920250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:05.920309  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:05.948255  620795 cri.go:89] found id: ""
	I1213 12:07:05.948280  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.948289  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:05.948295  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:05.948352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:05.975137  620795 cri.go:89] found id: ""
	I1213 12:07:05.975162  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.975211  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:05.975222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:05.975283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:06.006992  620795 cri.go:89] found id: ""
	I1213 12:07:06.007020  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.007030  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:06.007037  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:06.007106  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:06.035032  620795 cri.go:89] found id: ""
	I1213 12:07:06.035067  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.035077  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:06.035084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:06.035157  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:06.066833  620795 cri.go:89] found id: ""
	I1213 12:07:06.066865  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.066875  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:06.066885  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:06.066899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:06.134254  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:06.134284  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:06.134297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:06.163816  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:06.163852  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:06.194055  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:06.194084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:06.262450  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:06.262550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:08.779798  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:08.793568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:08.793654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:08.848358  620795 cri.go:89] found id: ""
	I1213 12:07:08.848399  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.848408  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:08.848415  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:08.848485  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:08.881239  620795 cri.go:89] found id: ""
	I1213 12:07:08.881268  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.881278  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:08.881284  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:08.881358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:08.912007  620795 cri.go:89] found id: ""
	I1213 12:07:08.912038  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.912059  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:08.912070  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:08.912143  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:08.948718  620795 cri.go:89] found id: ""
	I1213 12:07:08.948744  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.948754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:08.948760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:08.948815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:08.974195  620795 cri.go:89] found id: ""
	I1213 12:07:08.974224  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.974234  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:08.974240  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:08.974298  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:09.000368  620795 cri.go:89] found id: ""
	I1213 12:07:09.000409  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.000420  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:09.000428  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:09.000500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:09.027504  620795 cri.go:89] found id: ""
	I1213 12:07:09.027539  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.027548  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:09.027554  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:09.027611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:09.052844  620795 cri.go:89] found id: ""
	I1213 12:07:09.052870  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.052879  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:09.052888  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:09.052899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:09.080443  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:09.080483  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:09.109721  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:09.109747  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:09.174545  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:09.174581  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:09.192943  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:09.192974  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:09.256162  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:11.756459  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:11.766714  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:11.766784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:11.797701  620795 cri.go:89] found id: ""
	I1213 12:07:11.797728  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.797737  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:11.797753  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:11.797832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:11.833489  620795 cri.go:89] found id: ""
	I1213 12:07:11.833563  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.833585  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:11.833604  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:11.833692  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:11.869283  620795 cri.go:89] found id: ""
	I1213 12:07:11.869305  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.869314  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:11.869320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:11.869376  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:11.899820  620795 cri.go:89] found id: ""
	I1213 12:07:11.899845  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.899855  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:11.899862  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:11.899925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:11.926125  620795 cri.go:89] found id: ""
	I1213 12:07:11.926150  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.926159  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:11.926166  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:11.926224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:11.952049  620795 cri.go:89] found id: ""
	I1213 12:07:11.952131  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.952165  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:11.952178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:11.952250  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:11.982382  620795 cri.go:89] found id: ""
	I1213 12:07:11.982407  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.982415  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:11.982421  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:11.982494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:12.014887  620795 cri.go:89] found id: ""
	I1213 12:07:12.014912  620795 logs.go:282] 0 containers: []
	W1213 12:07:12.014921  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:12.014931  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:12.014943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:12.080370  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:12.080407  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:12.097493  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:12.097534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:12.163658  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:12.163680  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:12.163692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:12.192505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:12.192544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:14.721085  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:14.731999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:14.732070  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:14.758997  620795 cri.go:89] found id: ""
	I1213 12:07:14.759023  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.759032  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:14.759039  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:14.759098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:14.831264  620795 cri.go:89] found id: ""
	I1213 12:07:14.831294  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.831303  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:14.831310  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:14.831366  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:14.882934  620795 cri.go:89] found id: ""
	I1213 12:07:14.882964  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.882973  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:14.882980  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:14.883040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:14.916858  620795 cri.go:89] found id: ""
	I1213 12:07:14.916888  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.916898  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:14.916905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:14.916969  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:14.942297  620795 cri.go:89] found id: ""
	I1213 12:07:14.942334  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.942343  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:14.942355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:14.942431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:14.967905  620795 cri.go:89] found id: ""
	I1213 12:07:14.967927  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.967936  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:14.967942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:14.968000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:14.993041  620795 cri.go:89] found id: ""
	I1213 12:07:14.993107  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.993131  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:14.993145  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:14.993224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:15.027730  620795 cri.go:89] found id: ""
	I1213 12:07:15.027755  620795 logs.go:282] 0 containers: []
	W1213 12:07:15.027765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:15.027776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:15.027789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:15.095470  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:15.095507  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:15.113485  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:15.113567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:15.183456  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:15.183481  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:15.183497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:15.212670  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:15.212706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:17.745028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:17.755868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:17.755965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:17.830528  620795 cri.go:89] found id: ""
	I1213 12:07:17.830551  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.830559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:17.830585  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:17.830654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:17.866003  620795 cri.go:89] found id: ""
	I1213 12:07:17.866029  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.866038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:17.866044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:17.866102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:17.891564  620795 cri.go:89] found id: ""
	I1213 12:07:17.891588  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.891597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:17.891603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:17.891664  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:17.918740  620795 cri.go:89] found id: ""
	I1213 12:07:17.918768  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.918776  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:17.918783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:17.918845  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:17.950736  620795 cri.go:89] found id: ""
	I1213 12:07:17.950774  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.950784  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:17.950790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:17.950854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:17.976775  620795 cri.go:89] found id: ""
	I1213 12:07:17.976799  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.976809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:17.976816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:17.976883  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:18.008430  620795 cri.go:89] found id: ""
	I1213 12:07:18.008460  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.008469  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:18.008477  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:18.008564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:18.037446  620795 cri.go:89] found id: ""
	I1213 12:07:18.037477  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.037488  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:18.037502  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:18.037517  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:18.068414  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:18.068443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:18.138588  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:18.138627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:18.155698  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:18.155729  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:18.222792  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:18.222835  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:18.222847  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:20.751476  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:20.762121  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:20.762190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:20.818771  620795 cri.go:89] found id: ""
	I1213 12:07:20.818794  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.818803  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:20.818810  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:20.818877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:20.873533  620795 cri.go:89] found id: ""
	I1213 12:07:20.873556  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.873564  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:20.873581  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:20.873639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:20.900689  620795 cri.go:89] found id: ""
	I1213 12:07:20.900716  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.900725  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:20.900732  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:20.900790  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:20.926298  620795 cri.go:89] found id: ""
	I1213 12:07:20.926324  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.926334  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:20.926340  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:20.926400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:20.955692  620795 cri.go:89] found id: ""
	I1213 12:07:20.955767  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.955789  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:20.955808  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:20.955904  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:20.981101  620795 cri.go:89] found id: ""
	I1213 12:07:20.981126  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.981135  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:20.981146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:20.981208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:21.012906  620795 cri.go:89] found id: ""
	I1213 12:07:21.012933  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.012942  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:21.012949  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:21.013024  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:21.043717  620795 cri.go:89] found id: ""
	I1213 12:07:21.043743  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.043753  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:21.043764  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:21.043776  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:21.116319  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:21.116368  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:21.133173  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:21.133204  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:21.201103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:21.201127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:21.201140  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:21.229422  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:21.229457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:23.763349  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:23.781088  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:23.781159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:23.857623  620795 cri.go:89] found id: ""
	I1213 12:07:23.857648  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.857666  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:23.857673  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:23.857736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:23.882807  620795 cri.go:89] found id: ""
	I1213 12:07:23.882833  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.882842  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:23.882849  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:23.882907  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:23.908402  620795 cri.go:89] found id: ""
	I1213 12:07:23.908430  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.908440  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:23.908447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:23.908506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:23.933800  620795 cri.go:89] found id: ""
	I1213 12:07:23.933826  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.933835  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:23.933841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:23.933919  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:23.959222  620795 cri.go:89] found id: ""
	I1213 12:07:23.959248  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.959259  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:23.959266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:23.959352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:23.985470  620795 cri.go:89] found id: ""
	I1213 12:07:23.985496  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.985505  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:23.985512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:23.985570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:24.014442  620795 cri.go:89] found id: ""
	I1213 12:07:24.014477  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.014487  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:24.014494  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:24.014556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:24.043282  620795 cri.go:89] found id: ""
	I1213 12:07:24.043308  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.043318  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:24.043328  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:24.043340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:24.075046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:24.075073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:24.143658  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:24.143701  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:24.160736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:24.160765  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:24.224652  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:24.224675  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:24.224692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:26.754848  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:26.765356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:26.765429  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:26.818982  620795 cri.go:89] found id: ""
	I1213 12:07:26.819005  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.819013  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:26.819020  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:26.819078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:26.871231  620795 cri.go:89] found id: ""
	I1213 12:07:26.871253  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.871262  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:26.871268  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:26.871326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:26.898363  620795 cri.go:89] found id: ""
	I1213 12:07:26.898443  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.898467  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:26.898486  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:26.898578  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:26.923840  620795 cri.go:89] found id: ""
	I1213 12:07:26.923866  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.923875  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:26.923882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:26.923940  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:26.952921  620795 cri.go:89] found id: ""
	I1213 12:07:26.952950  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.952960  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:26.952967  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:26.953028  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:26.984162  620795 cri.go:89] found id: ""
	I1213 12:07:26.984188  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.984197  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:26.984203  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:26.984282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:27.022329  620795 cri.go:89] found id: ""
	I1213 12:07:27.022397  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.022413  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:27.022420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:27.022479  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:27.048366  620795 cri.go:89] found id: ""
	I1213 12:07:27.048391  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.048401  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:27.048410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:27.048423  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:27.076996  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:27.077029  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:27.149458  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:27.149509  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:27.167444  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:27.167473  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:27.235232  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:27.235258  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:27.235270  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:29.764538  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:29.791446  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:29.791560  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:29.844876  620795 cri.go:89] found id: ""
	I1213 12:07:29.844953  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.844976  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:29.844996  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:29.845082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:29.884357  620795 cri.go:89] found id: ""
	I1213 12:07:29.884423  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.884441  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:29.884449  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:29.884508  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:29.914712  620795 cri.go:89] found id: ""
	I1213 12:07:29.914738  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.914748  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:29.914755  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:29.914813  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:29.940420  620795 cri.go:89] found id: ""
	I1213 12:07:29.940500  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.940516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:29.940524  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:29.940585  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:29.970378  620795 cri.go:89] found id: ""
	I1213 12:07:29.970404  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.970413  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:29.970420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:29.970478  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:29.996803  620795 cri.go:89] found id: ""
	I1213 12:07:29.996881  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.996898  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:29.996907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:29.996983  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:30.040874  620795 cri.go:89] found id: ""
	I1213 12:07:30.040904  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.040913  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:30.040920  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:30.040995  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:30.083632  620795 cri.go:89] found id: ""
	I1213 12:07:30.083658  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.083667  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:30.083676  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:30.083689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:30.149516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:30.149553  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:30.167731  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:30.167816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:30.233503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:30.233567  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:30.233586  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:30.263464  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:30.263497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:32.796303  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:32.813180  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:32.813263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:32.849335  620795 cri.go:89] found id: ""
	I1213 12:07:32.849413  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.849456  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:32.849481  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:32.849570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:32.880068  620795 cri.go:89] found id: ""
	I1213 12:07:32.880092  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.880101  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:32.880107  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:32.880165  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:32.907166  620795 cri.go:89] found id: ""
	I1213 12:07:32.907193  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.907202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:32.907209  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:32.907266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:32.933296  620795 cri.go:89] found id: ""
	I1213 12:07:32.933366  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.933388  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:32.933407  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:32.933500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:32.959040  620795 cri.go:89] found id: ""
	I1213 12:07:32.959106  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.959130  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:32.959149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:32.959233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:32.989508  620795 cri.go:89] found id: ""
	I1213 12:07:32.989531  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.989540  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:32.989546  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:32.989629  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:33.018978  620795 cri.go:89] found id: ""
	I1213 12:07:33.019002  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.019010  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:33.019017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:33.019098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:33.046327  620795 cri.go:89] found id: ""
	I1213 12:07:33.046359  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.046368  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:33.046378  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:33.046419  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:33.075176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:33.075213  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:33.107277  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:33.107309  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:33.174349  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:33.174384  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:33.192737  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:33.192770  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:33.259992  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:35.760267  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:35.771899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:35.771965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:35.816451  620795 cri.go:89] found id: ""
	I1213 12:07:35.816499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.816508  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:35.816519  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:35.816576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:35.874010  620795 cri.go:89] found id: ""
	I1213 12:07:35.874031  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.874040  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:35.874046  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:35.874109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:35.901470  620795 cri.go:89] found id: ""
	I1213 12:07:35.901499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.901509  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:35.901515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:35.901577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:35.929967  620795 cri.go:89] found id: ""
	I1213 12:07:35.929988  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.929997  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:35.930004  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:35.930061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:35.959220  620795 cri.go:89] found id: ""
	I1213 12:07:35.959245  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.959255  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:35.959262  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:35.959323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:35.988889  620795 cri.go:89] found id: ""
	I1213 12:07:35.988916  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.988925  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:35.988932  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:35.988990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:36.017868  620795 cri.go:89] found id: ""
	I1213 12:07:36.017896  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.017906  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:36.017912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:36.017975  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:36.046482  620795 cri.go:89] found id: ""
	I1213 12:07:36.046508  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.046517  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:36.046527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:36.046539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:36.063480  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:36.063675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:36.134374  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:36.134437  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:36.134465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:36.164786  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:36.164831  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:36.195048  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:36.195077  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:38.762384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:38.773774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:38.773860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:38.823096  620795 cri.go:89] found id: ""
	I1213 12:07:38.823118  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.823127  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:38.823133  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:38.823192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:38.859735  620795 cri.go:89] found id: ""
	I1213 12:07:38.859758  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.859766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:38.859773  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:38.859832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:38.888780  620795 cri.go:89] found id: ""
	I1213 12:07:38.888806  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.888815  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:38.888821  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:38.888885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:38.918480  620795 cri.go:89] found id: ""
	I1213 12:07:38.918506  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.918516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:38.918522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:38.918579  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:38.944442  620795 cri.go:89] found id: ""
	I1213 12:07:38.944475  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.944485  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:38.944492  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:38.944548  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:38.972111  620795 cri.go:89] found id: ""
	I1213 12:07:38.972138  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.972148  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:38.972156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:38.972217  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:38.999220  620795 cri.go:89] found id: ""
	I1213 12:07:38.999249  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.999259  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:38.999266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:38.999387  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:39.027462  620795 cri.go:89] found id: ""
	I1213 12:07:39.027489  620795 logs.go:282] 0 containers: []
	W1213 12:07:39.027498  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:39.027508  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:39.027551  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:39.045387  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:39.045421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:39.113555  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:39.113577  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:39.113591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:39.141868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:39.141905  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:39.170660  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:39.170687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:41.738914  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:41.749712  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:41.749788  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:41.815733  620795 cri.go:89] found id: ""
	I1213 12:07:41.815757  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.815767  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:41.815774  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:41.815837  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:41.853772  620795 cri.go:89] found id: ""
	I1213 12:07:41.853794  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.853802  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:41.853808  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:41.853864  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:41.880989  620795 cri.go:89] found id: ""
	I1213 12:07:41.881012  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.881021  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:41.881027  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:41.881085  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:41.910432  620795 cri.go:89] found id: ""
	I1213 12:07:41.910455  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.910464  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:41.910470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:41.910525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:41.938539  620795 cri.go:89] found id: ""
	I1213 12:07:41.938561  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.938570  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:41.938576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:41.938636  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:41.964574  620795 cri.go:89] found id: ""
	I1213 12:07:41.964608  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.964617  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:41.964624  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:41.964681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:41.989355  620795 cri.go:89] found id: ""
	I1213 12:07:41.989380  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.989389  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:41.989396  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:41.989456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:42.019802  620795 cri.go:89] found id: ""
	I1213 12:07:42.019830  620795 logs.go:282] 0 containers: []
	W1213 12:07:42.019839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:42.019849  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:42.019861  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:42.052058  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:42.052087  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:42.123300  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:42.123360  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:42.144729  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:42.144768  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:42.227868  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:42.227896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:42.227910  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:44.760193  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:44.770916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:44.770989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:44.803100  620795 cri.go:89] found id: ""
	I1213 12:07:44.803124  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.803133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:44.803140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:44.803195  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:44.851212  620795 cri.go:89] found id: ""
	I1213 12:07:44.851235  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.851244  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:44.851250  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:44.851307  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:44.902052  620795 cri.go:89] found id: ""
	I1213 12:07:44.902075  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.902084  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:44.902090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:44.902150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:44.933898  620795 cri.go:89] found id: ""
	I1213 12:07:44.933926  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.933935  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:44.933942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:44.934026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:44.963132  620795 cri.go:89] found id: ""
	I1213 12:07:44.963158  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.963167  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:44.963174  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:44.963261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:44.988132  620795 cri.go:89] found id: ""
	I1213 12:07:44.988163  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.988174  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:44.988181  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:44.988238  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:45.046906  620795 cri.go:89] found id: ""
	I1213 12:07:45.046934  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.046943  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:45.046951  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:45.047019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:45.080632  620795 cri.go:89] found id: ""
	I1213 12:07:45.080730  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.080752  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:45.080792  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:45.080810  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:45.157685  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:45.157797  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:45.212507  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:45.212574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:45.292666  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:45.292707  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:45.292720  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:45.321658  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:45.321690  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:47.858977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:47.870353  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:47.870425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:47.902849  620795 cri.go:89] found id: ""
	I1213 12:07:47.902874  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.902883  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:47.902890  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:47.902958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:47.928841  620795 cri.go:89] found id: ""
	I1213 12:07:47.928866  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.928875  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:47.928882  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:47.928943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:47.954469  620795 cri.go:89] found id: ""
	I1213 12:07:47.954494  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.954503  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:47.954510  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:47.954571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:47.984225  620795 cri.go:89] found id: ""
	I1213 12:07:47.984248  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.984257  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:47.984263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:47.984327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:48.013666  620795 cri.go:89] found id: ""
	I1213 12:07:48.013694  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.013704  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:48.013710  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:48.013776  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:48.043313  620795 cri.go:89] found id: ""
	I1213 12:07:48.043341  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.043351  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:48.043358  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:48.043445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:48.070641  620795 cri.go:89] found id: ""
	I1213 12:07:48.070669  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.070680  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:48.070687  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:48.070767  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:48.096729  620795 cri.go:89] found id: ""
	I1213 12:07:48.096754  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.096764  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:48.096773  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:48.096785  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:48.129289  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:48.129318  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:48.196743  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:48.196781  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:48.213775  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:48.213802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:48.282000  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:48.282076  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:48.282104  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:50.813946  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:50.834838  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:50.834928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:50.871307  620795 cri.go:89] found id: ""
	I1213 12:07:50.871329  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.871337  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:50.871343  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:50.871400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:50.900887  620795 cri.go:89] found id: ""
	I1213 12:07:50.900913  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.900922  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:50.900929  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:50.900987  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:50.926497  620795 cri.go:89] found id: ""
	I1213 12:07:50.926569  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.926606  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:50.926631  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:50.926721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:50.954230  620795 cri.go:89] found id: ""
	I1213 12:07:50.954256  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.954266  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:50.954273  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:50.954331  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:50.980389  620795 cri.go:89] found id: ""
	I1213 12:07:50.980414  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.980425  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:50.980431  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:50.980490  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:51.007396  620795 cri.go:89] found id: ""
	I1213 12:07:51.007423  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.007433  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:51.007444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:51.007507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:51.038515  620795 cri.go:89] found id: ""
	I1213 12:07:51.038540  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.038550  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:51.038556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:51.038611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:51.066063  620795 cri.go:89] found id: ""
	I1213 12:07:51.066088  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.066096  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:51.066111  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:51.066122  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:51.131363  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:51.131402  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:51.148223  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:51.148253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:51.211768  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:51.211791  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:51.211807  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:51.239792  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:51.239825  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:53.772909  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:53.794190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:53.794255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:53.863195  620795 cri.go:89] found id: ""
	I1213 12:07:53.863228  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.863239  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:53.863246  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:53.863323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:53.894744  620795 cri.go:89] found id: ""
	I1213 12:07:53.894812  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.894836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:53.894855  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:53.894941  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:53.922176  620795 cri.go:89] found id: ""
	I1213 12:07:53.922244  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.922266  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:53.922284  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:53.922371  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:53.948409  620795 cri.go:89] found id: ""
	I1213 12:07:53.948437  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.948446  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:53.948453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:53.948512  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:53.974142  620795 cri.go:89] found id: ""
	I1213 12:07:53.974222  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.974244  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:53.974263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:53.974369  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:54.002307  620795 cri.go:89] found id: ""
	I1213 12:07:54.002343  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.002353  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:54.002361  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:54.002440  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:54.030334  620795 cri.go:89] found id: ""
	I1213 12:07:54.030413  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.030438  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:54.030457  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:54.030566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:54.056614  620795 cri.go:89] found id: ""
	I1213 12:07:54.056697  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.056713  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:54.056724  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:54.056737  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:54.124215  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:54.124253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:54.141024  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:54.141052  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:54.203423  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:54.203445  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:54.203457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:54.231323  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:54.231355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:56.762827  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:56.786084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:56.786208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:56.855486  620795 cri.go:89] found id: ""
	I1213 12:07:56.855531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.855542  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:56.855549  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:56.855615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:56.883436  620795 cri.go:89] found id: ""
	I1213 12:07:56.883531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.883557  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:56.883587  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:56.883648  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:56.908626  620795 cri.go:89] found id: ""
	I1213 12:07:56.908708  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.908739  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:56.908752  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:56.908821  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:56.935174  620795 cri.go:89] found id: ""
	I1213 12:07:56.935201  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.935210  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:56.935217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:56.935302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:56.964101  620795 cri.go:89] found id: ""
	I1213 12:07:56.964128  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.964139  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:56.964146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:56.964232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:56.989991  620795 cri.go:89] found id: ""
	I1213 12:07:56.990016  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.990025  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:56.990032  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:56.990117  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:57.021908  620795 cri.go:89] found id: ""
	I1213 12:07:57.021934  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.021944  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:57.021952  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:57.022015  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:57.050893  620795 cri.go:89] found id: ""
	I1213 12:07:57.050919  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.050929  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:57.050939  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:57.050958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:57.114649  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:57.114709  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:57.114743  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:57.142743  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:57.142778  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:57.171088  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:57.171120  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:57.236905  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:57.236948  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:59.754255  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:59.764877  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:59.764948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:59.800655  620795 cri.go:89] found id: ""
	I1213 12:07:59.800682  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.800691  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:59.800698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:59.800757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:59.844261  620795 cri.go:89] found id: ""
	I1213 12:07:59.844289  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.844299  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:59.844305  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:59.844363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:59.890278  620795 cri.go:89] found id: ""
	I1213 12:07:59.890303  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.890313  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:59.890319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:59.890379  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:59.918606  620795 cri.go:89] found id: ""
	I1213 12:07:59.918632  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.918641  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:59.918647  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:59.918703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:59.947895  620795 cri.go:89] found id: ""
	I1213 12:07:59.947918  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.947928  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:59.947934  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:59.947993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:59.973045  620795 cri.go:89] found id: ""
	I1213 12:07:59.973073  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.973082  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:59.973089  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:59.973163  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:00.009231  620795 cri.go:89] found id: ""
	I1213 12:08:00.009320  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.009353  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:00.009374  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:00.009507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:00.119476  620795 cri.go:89] found id: ""
	I1213 12:08:00.119618  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.119644  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:00.119687  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:00.119721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:00.145226  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:00.145450  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:00.282893  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:00.282923  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:00.282944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:00.371336  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:00.371439  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:00.430461  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:00.430503  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.002113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:03.014603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:03.014679  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:03.042673  620795 cri.go:89] found id: ""
	I1213 12:08:03.042701  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.042711  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:03.042718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:03.042778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:03.074056  620795 cri.go:89] found id: ""
	I1213 12:08:03.074133  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.074164  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:03.074185  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:03.074301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:03.101450  620795 cri.go:89] found id: ""
	I1213 12:08:03.101485  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.101495  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:03.101502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:03.101564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:03.132013  620795 cri.go:89] found id: ""
	I1213 12:08:03.132042  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.132053  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:03.132060  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:03.132123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:03.158035  620795 cri.go:89] found id: ""
	I1213 12:08:03.158057  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.158067  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:03.158074  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:03.158131  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:03.183772  620795 cri.go:89] found id: ""
	I1213 12:08:03.183800  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.183809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:03.183816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:03.183879  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:03.209685  620795 cri.go:89] found id: ""
	I1213 12:08:03.209710  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.209718  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:03.209725  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:03.209809  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:03.238718  620795 cri.go:89] found id: ""
	I1213 12:08:03.238742  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.238751  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:03.238760  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:03.238771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:03.266176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:03.266211  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:03.295327  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:03.295357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.371751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:03.371796  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:03.388535  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:03.388569  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:03.455075  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:05.956468  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:05.967247  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:05.967349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:05.992470  620795 cri.go:89] found id: ""
	I1213 12:08:05.992495  620795 logs.go:282] 0 containers: []
	W1213 12:08:05.992504  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:05.992510  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:05.992576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:06.025309  620795 cri.go:89] found id: ""
	I1213 12:08:06.025339  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.025349  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:06.025356  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:06.025417  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:06.056164  620795 cri.go:89] found id: ""
	I1213 12:08:06.056192  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.056202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:06.056208  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:06.056268  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:06.091020  620795 cri.go:89] found id: ""
	I1213 12:08:06.091047  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.091057  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:06.091063  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:06.091124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:06.117741  620795 cri.go:89] found id: ""
	I1213 12:08:06.117767  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.117776  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:06.117792  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:06.117850  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:06.143430  620795 cri.go:89] found id: ""
	I1213 12:08:06.143454  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.143465  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:06.143472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:06.143558  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:06.169857  620795 cri.go:89] found id: ""
	I1213 12:08:06.169883  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.169892  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:06.169899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:06.169959  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:06.196298  620795 cri.go:89] found id: ""
	I1213 12:08:06.196325  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.196335  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:06.196344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:06.196385  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:06.212572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:06.212599  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:06.278450  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:06.278473  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:06.278485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:06.306640  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:06.306679  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:06.336266  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:06.336295  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:08.901791  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:08.912829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:08.912897  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:08.942435  620795 cri.go:89] found id: ""
	I1213 12:08:08.942467  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.942476  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:08.942483  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:08.942552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:08.968397  620795 cri.go:89] found id: ""
	I1213 12:08:08.968475  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.968508  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:08.968533  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:08.968615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:08.995667  620795 cri.go:89] found id: ""
	I1213 12:08:08.995734  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.995757  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:08.995776  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:08.995851  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:09.026748  620795 cri.go:89] found id: ""
	I1213 12:08:09.026827  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.026859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:09.026878  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:09.026961  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:09.052881  620795 cri.go:89] found id: ""
	I1213 12:08:09.052910  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.052919  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:09.052926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:09.053016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:09.079635  620795 cri.go:89] found id: ""
	I1213 12:08:09.079663  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.079673  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:09.079679  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:09.079740  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:09.106465  620795 cri.go:89] found id: ""
	I1213 12:08:09.106499  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.106507  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:09.106529  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:09.106610  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:09.132296  620795 cri.go:89] found id: ""
	I1213 12:08:09.132373  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.132389  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:09.132400  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:09.132411  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:09.198891  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:09.198937  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:09.215689  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:09.215718  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:09.283376  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:09.283399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:09.283412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:09.311953  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:09.311995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:11.844673  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:11.854957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:11.855031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:11.884334  620795 cri.go:89] found id: ""
	I1213 12:08:11.884361  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.884370  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:11.884377  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:11.884438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:11.911693  620795 cri.go:89] found id: ""
	I1213 12:08:11.911715  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.911724  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:11.911730  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:11.911785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:11.939653  620795 cri.go:89] found id: ""
	I1213 12:08:11.939679  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.939688  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:11.939694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:11.939753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:11.965596  620795 cri.go:89] found id: ""
	I1213 12:08:11.965622  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.965631  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:11.965639  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:11.965695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:11.994822  620795 cri.go:89] found id: ""
	I1213 12:08:11.994848  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.994857  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:11.994863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:11.994921  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:12.027085  620795 cri.go:89] found id: ""
	I1213 12:08:12.027111  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.027119  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:12.027127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:12.027189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:12.060592  620795 cri.go:89] found id: ""
	I1213 12:08:12.060621  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.060631  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:12.060637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:12.060695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:12.087001  620795 cri.go:89] found id: ""
	I1213 12:08:12.087026  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.087035  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:12.087046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:12.087057  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:12.154968  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:12.155007  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:12.173266  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:12.173296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:12.238320  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:12.238342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:12.238353  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:12.266852  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:12.266886  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:14.799502  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:14.811316  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:14.811495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:14.868310  620795 cri.go:89] found id: ""
	I1213 12:08:14.868404  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.868430  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:14.868485  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:14.868662  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:14.910677  620795 cri.go:89] found id: ""
	I1213 12:08:14.910744  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.910766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:14.910785  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:14.910872  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:14.939727  620795 cri.go:89] found id: ""
	I1213 12:08:14.939767  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.939777  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:14.939783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:14.939849  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:14.966035  620795 cri.go:89] found id: ""
	I1213 12:08:14.966069  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.966078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:14.966086  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:14.966160  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:14.994530  620795 cri.go:89] found id: ""
	I1213 12:08:14.994596  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.994619  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:14.994641  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:14.994727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:15.032176  620795 cri.go:89] found id: ""
	I1213 12:08:15.032213  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.032223  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:15.032230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:15.032294  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:15.063866  620795 cri.go:89] found id: ""
	I1213 12:08:15.063900  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.063910  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:15.063916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:15.063977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:15.094824  620795 cri.go:89] found id: ""
	I1213 12:08:15.094857  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.094867  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:15.094876  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:15.094888  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:15.123857  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:15.123926  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:15.189408  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:15.189444  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:15.208112  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:15.208143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:15.272770  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:15.272794  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:15.272806  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:17.802242  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:17.818907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:17.818976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:17.860553  620795 cri.go:89] found id: ""
	I1213 12:08:17.860577  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.860586  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:17.860594  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:17.860663  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:17.890844  620795 cri.go:89] found id: ""
	I1213 12:08:17.890868  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.890877  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:17.890883  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:17.890937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:17.916758  620795 cri.go:89] found id: ""
	I1213 12:08:17.916784  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.916794  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:17.916800  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:17.916860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:17.946527  620795 cri.go:89] found id: ""
	I1213 12:08:17.946564  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.946573  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:17.946598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:17.946684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:17.971981  620795 cri.go:89] found id: ""
	I1213 12:08:17.972004  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.972013  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:17.972020  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:17.972075  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:17.997005  620795 cri.go:89] found id: ""
	I1213 12:08:17.997042  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.997052  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:17.997059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:17.997126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:18.029007  620795 cri.go:89] found id: ""
	I1213 12:08:18.029038  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.029054  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:18.029061  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:18.029120  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:18.056596  620795 cri.go:89] found id: ""
	I1213 12:08:18.056625  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.056637  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:18.056647  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:18.056661  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:18.074846  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:18.074874  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:18.144092  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:18.144157  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:18.144176  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:18.173096  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:18.173134  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:18.208914  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:18.208943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:20.774528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:20.788572  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:20.788639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:20.858764  620795 cri.go:89] found id: ""
	I1213 12:08:20.858786  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.858794  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:20.858800  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:20.858857  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:20.887866  620795 cri.go:89] found id: ""
	I1213 12:08:20.887888  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.887897  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:20.887904  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:20.887967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:20.918367  620795 cri.go:89] found id: ""
	I1213 12:08:20.918438  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.918462  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:20.918481  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:20.918566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:20.943267  620795 cri.go:89] found id: ""
	I1213 12:08:20.943292  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.943301  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:20.943308  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:20.943362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:20.972672  620795 cri.go:89] found id: ""
	I1213 12:08:20.972707  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.972716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:20.972723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:20.972781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:20.997368  620795 cri.go:89] found id: ""
	I1213 12:08:20.997394  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.997404  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:20.997411  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:20.997487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:21.029283  620795 cri.go:89] found id: ""
	I1213 12:08:21.029309  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.029319  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:21.029328  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:21.029382  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:21.054485  620795 cri.go:89] found id: ""
	I1213 12:08:21.054510  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.054520  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:21.054529  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:21.054540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:21.121036  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:21.121073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:21.137498  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:21.137526  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:21.201021  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:21.201047  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:21.201060  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:21.233120  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:21.233155  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:23.768528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:23.784788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:23.784875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:23.861902  620795 cri.go:89] found id: ""
	I1213 12:08:23.861933  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.861949  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:23.861956  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:23.862019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:23.890007  620795 cri.go:89] found id: ""
	I1213 12:08:23.890029  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.890038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:23.890044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:23.890104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:23.915427  620795 cri.go:89] found id: ""
	I1213 12:08:23.915450  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.915459  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:23.915465  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:23.915550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:23.941041  620795 cri.go:89] found id: ""
	I1213 12:08:23.941069  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.941078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:23.941085  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:23.941141  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:23.966860  620795 cri.go:89] found id: ""
	I1213 12:08:23.966886  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.966895  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:23.966902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:23.966958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:23.992499  620795 cri.go:89] found id: ""
	I1213 12:08:23.992528  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.992537  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:23.992558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:23.992616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:24.019996  620795 cri.go:89] found id: ""
	I1213 12:08:24.020030  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.020045  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:24.020052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:24.020129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:24.047181  620795 cri.go:89] found id: ""
	I1213 12:08:24.047216  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.047225  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:24.047234  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:24.047245  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:24.110372  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:24.110398  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:24.110412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:24.139714  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:24.139748  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:24.172397  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:24.172426  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:24.240938  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:24.240975  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:26.757922  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:26.771140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:26.771256  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:26.808049  620795 cri.go:89] found id: ""
	I1213 12:08:26.808124  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.808149  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:26.808169  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:26.808258  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:26.845750  620795 cri.go:89] found id: ""
	I1213 12:08:26.845826  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.845851  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:26.845870  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:26.845951  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:26.885327  620795 cri.go:89] found id: ""
	I1213 12:08:26.885401  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.885424  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:26.885444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:26.885533  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:26.912813  620795 cri.go:89] found id: ""
	I1213 12:08:26.912844  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.912853  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:26.912860  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:26.912917  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:26.940224  620795 cri.go:89] found id: ""
	I1213 12:08:26.940301  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.940317  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:26.940325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:26.940383  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:26.970684  620795 cri.go:89] found id: ""
	I1213 12:08:26.970728  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.970738  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:26.970745  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:26.970825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:27.001739  620795 cri.go:89] found id: ""
	I1213 12:08:27.001821  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.001846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:27.001867  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:27.001968  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:27.029502  620795 cri.go:89] found id: ""
	I1213 12:08:27.029525  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.029533  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:27.029542  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:27.029561  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:27.097411  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:27.097433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:27.097445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:27.126207  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:27.126242  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:27.152776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:27.152814  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:27.218430  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:27.218466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:29.735087  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:29.746276  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:29.746353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:29.790488  620795 cri.go:89] found id: ""
	I1213 12:08:29.790563  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.790587  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:29.790607  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:29.790694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:29.863661  620795 cri.go:89] found id: ""
	I1213 12:08:29.863730  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.863747  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:29.863754  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:29.863822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:29.889696  620795 cri.go:89] found id: ""
	I1213 12:08:29.889723  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.889731  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:29.889738  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:29.889793  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:29.917557  620795 cri.go:89] found id: ""
	I1213 12:08:29.917619  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.917642  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:29.917657  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:29.917732  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:29.941179  620795 cri.go:89] found id: ""
	I1213 12:08:29.941201  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.941210  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:29.941217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:29.941276  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:29.965683  620795 cri.go:89] found id: ""
	I1213 12:08:29.965758  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.965775  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:29.965783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:29.965858  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:29.994076  620795 cri.go:89] found id: ""
	I1213 12:08:29.994111  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.994121  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:29.994127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:29.994189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:30.034696  620795 cri.go:89] found id: ""
	I1213 12:08:30.034723  620795 logs.go:282] 0 containers: []
	W1213 12:08:30.034733  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:30.034743  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:30.034756  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:30.103277  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:30.103319  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:30.120811  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:30.120901  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:30.194375  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:30.194399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:30.194412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:30.225794  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:30.225830  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:32.757391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:32.768065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:32.768178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:32.801083  620795 cri.go:89] found id: ""
	I1213 12:08:32.801105  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.801114  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:32.801123  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:32.801179  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:32.839546  620795 cri.go:89] found id: ""
	I1213 12:08:32.839567  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.839576  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:32.839582  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:32.839637  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:32.888939  620795 cri.go:89] found id: ""
	I1213 12:08:32.889005  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.889029  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:32.889044  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:32.889115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:32.926624  620795 cri.go:89] found id: ""
	I1213 12:08:32.926651  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.926666  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:32.926676  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:32.926752  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:32.958800  620795 cri.go:89] found id: ""
	I1213 12:08:32.958835  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.958844  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:32.958850  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:32.958916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:32.989617  620795 cri.go:89] found id: ""
	I1213 12:08:32.989692  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.989708  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:32.989721  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:32.989791  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:33.017551  620795 cri.go:89] found id: ""
	I1213 12:08:33.017623  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.017647  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:33.017659  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:33.017736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:33.043587  620795 cri.go:89] found id: ""
	I1213 12:08:33.043612  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.043621  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:33.043632  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:33.043644  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:33.114830  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:33.114904  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:33.114923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:33.144060  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:33.144098  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:33.174527  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:33.174559  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:33.242589  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:33.242622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:35.760100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:35.770376  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:35.770444  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:35.803335  620795 cri.go:89] found id: ""
	I1213 12:08:35.803356  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.803365  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:35.803371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:35.803427  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:35.837892  620795 cri.go:89] found id: ""
	I1213 12:08:35.837916  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.837926  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:35.837933  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:35.837989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:35.866561  620795 cri.go:89] found id: ""
	I1213 12:08:35.866588  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.866598  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:35.866605  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:35.866667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:35.892759  620795 cri.go:89] found id: ""
	I1213 12:08:35.892795  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.892804  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:35.892810  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:35.892880  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:35.923215  620795 cri.go:89] found id: ""
	I1213 12:08:35.923238  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.923247  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:35.923252  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:35.923310  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:35.950448  620795 cri.go:89] found id: ""
	I1213 12:08:35.950475  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.950484  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:35.950491  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:35.950546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:35.976121  620795 cri.go:89] found id: ""
	I1213 12:08:35.976149  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.976158  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:35.976165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:35.976247  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:36.007726  620795 cri.go:89] found id: ""
	I1213 12:08:36.007754  620795 logs.go:282] 0 containers: []
	W1213 12:08:36.007765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:36.007774  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:36.007789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:36.085423  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:36.085465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:36.104590  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:36.104621  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:36.174734  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:36.174757  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:36.174771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:36.204232  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:36.204271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:38.733384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:38.744052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:38.744118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:38.780661  620795 cri.go:89] found id: ""
	I1213 12:08:38.780685  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.780694  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:38.780704  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:38.780764  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:38.822383  620795 cri.go:89] found id: ""
	I1213 12:08:38.822407  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.822416  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:38.822422  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:38.822477  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:38.855498  620795 cri.go:89] found id: ""
	I1213 12:08:38.855544  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.855553  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:38.855565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:38.855619  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:38.885018  620795 cri.go:89] found id: ""
	I1213 12:08:38.885045  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.885055  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:38.885062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:38.885119  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:38.910126  620795 cri.go:89] found id: ""
	I1213 12:08:38.910162  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.910172  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:38.910179  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:38.910246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:38.940467  620795 cri.go:89] found id: ""
	I1213 12:08:38.940502  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.940513  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:38.940520  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:38.940597  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:38.966188  620795 cri.go:89] found id: ""
	I1213 12:08:38.966222  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.966232  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:38.966238  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:38.966303  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:38.995881  620795 cri.go:89] found id: ""
	I1213 12:08:38.995907  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.995917  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:38.995927  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:38.995939  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:39.015887  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:39.015917  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:39.098130  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:39.098150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:39.098163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:39.126236  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:39.126269  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:39.153815  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:39.153842  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:41.721729  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:41.732158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:41.732229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:41.760995  620795 cri.go:89] found id: ""
	I1213 12:08:41.761017  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.761026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:41.761033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:41.761087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:41.795082  620795 cri.go:89] found id: ""
	I1213 12:08:41.795105  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.795113  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:41.795119  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:41.795184  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:41.825959  620795 cri.go:89] found id: ""
	I1213 12:08:41.826033  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.826056  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:41.826076  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:41.826159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:41.852118  620795 cri.go:89] found id: ""
	I1213 12:08:41.852183  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.852198  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:41.852205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:41.852261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:41.877587  620795 cri.go:89] found id: ""
	I1213 12:08:41.877626  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.877636  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:41.877642  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:41.877706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:41.906166  620795 cri.go:89] found id: ""
	I1213 12:08:41.906192  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.906202  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:41.906216  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:41.906273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:41.935663  620795 cri.go:89] found id: ""
	I1213 12:08:41.935688  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.935697  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:41.935704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:41.935761  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:41.960919  620795 cri.go:89] found id: ""
	I1213 12:08:41.960943  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.960952  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:41.960960  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:41.960971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:41.989438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:41.989472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:42.026694  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:42.026779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:42.120242  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:42.120297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:42.141212  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:42.141246  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:42.216949  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:44.717236  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:44.728891  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:44.728977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:44.753976  620795 cri.go:89] found id: ""
	I1213 12:08:44.754000  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.754008  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:44.754018  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:44.754078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:44.786705  620795 cri.go:89] found id: ""
	I1213 12:08:44.786732  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.786741  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:44.786748  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:44.786806  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:44.822299  620795 cri.go:89] found id: ""
	I1213 12:08:44.822328  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.822337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:44.822345  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:44.822401  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:44.856823  620795 cri.go:89] found id: ""
	I1213 12:08:44.856856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.856867  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:44.856873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:44.856930  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:44.882589  620795 cri.go:89] found id: ""
	I1213 12:08:44.882614  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.882623  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:44.882630  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:44.882688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:44.908466  620795 cri.go:89] found id: ""
	I1213 12:08:44.908491  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.908500  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:44.908507  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:44.908588  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:44.937829  620795 cri.go:89] found id: ""
	I1213 12:08:44.937856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.937865  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:44.937872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:44.937927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:44.963281  620795 cri.go:89] found id: ""
	I1213 12:08:44.963305  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.963315  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:44.963324  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:44.963335  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:44.991410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:44.991446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:45.037106  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:45.037139  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:45.136316  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:45.136362  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:45.159600  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:45.159635  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:45.275736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:47.775978  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:47.794424  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:47.794535  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:47.822730  620795 cri.go:89] found id: ""
	I1213 12:08:47.822773  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.822782  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:47.822794  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:47.822874  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:47.855882  620795 cri.go:89] found id: ""
	I1213 12:08:47.855909  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.855921  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:47.855928  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:47.855992  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:47.880824  620795 cri.go:89] found id: ""
	I1213 12:08:47.880849  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.880863  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:47.880870  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:47.880944  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:47.905536  620795 cri.go:89] found id: ""
	I1213 12:08:47.905558  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.905567  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:47.905573  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:47.905627  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:47.930629  620795 cri.go:89] found id: ""
	I1213 12:08:47.930651  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.930660  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:47.930666  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:47.930722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:47.963310  620795 cri.go:89] found id: ""
	I1213 12:08:47.963340  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.963348  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:47.963355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:47.963416  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:47.988259  620795 cri.go:89] found id: ""
	I1213 12:08:47.988284  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.988293  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:47.988300  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:47.988363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:48.016297  620795 cri.go:89] found id: ""
	I1213 12:08:48.016324  620795 logs.go:282] 0 containers: []
	W1213 12:08:48.016334  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:48.016344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:48.016358  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:48.036992  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:48.037157  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:48.110165  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:48.110186  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:48.110199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:48.138855  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:48.138892  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:48.167128  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:48.167162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:50.735817  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:50.746548  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:50.746616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:50.775549  620795 cri.go:89] found id: ""
	I1213 12:08:50.775575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.775585  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:50.775591  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:50.775646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:50.804612  620795 cri.go:89] found id: ""
	I1213 12:08:50.804635  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.804644  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:50.804650  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:50.804705  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:50.837625  620795 cri.go:89] found id: ""
	I1213 12:08:50.837650  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.837659  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:50.837665  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:50.837720  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:50.864589  620795 cri.go:89] found id: ""
	I1213 12:08:50.864612  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.864620  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:50.864627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:50.864687  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:50.889551  620795 cri.go:89] found id: ""
	I1213 12:08:50.889575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.889583  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:50.889589  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:50.889646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:50.919224  620795 cri.go:89] found id: ""
	I1213 12:08:50.919247  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.919255  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:50.919261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:50.919317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:50.944422  620795 cri.go:89] found id: ""
	I1213 12:08:50.944495  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.944574  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:50.944612  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:50.944696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:50.970021  620795 cri.go:89] found id: ""
	I1213 12:08:50.970086  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.970109  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:50.970132  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:50.970163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:50.986872  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:50.986906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:51.060506  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:51.060540  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:51.060552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:51.092480  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:51.092521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:51.123102  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:51.123131  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:53.694152  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:53.705704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:53.705773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:53.731245  620795 cri.go:89] found id: ""
	I1213 12:08:53.731268  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.731276  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:53.731282  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:53.731340  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:53.757925  620795 cri.go:89] found id: ""
	I1213 12:08:53.757957  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.757966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:53.757973  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:53.758036  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:53.808536  620795 cri.go:89] found id: ""
	I1213 12:08:53.808559  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.808568  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:53.808575  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:53.808635  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:53.840078  620795 cri.go:89] found id: ""
	I1213 12:08:53.840112  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.840122  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:53.840129  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:53.840189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:53.865894  620795 cri.go:89] found id: ""
	I1213 12:08:53.865917  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.865927  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:53.865933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:53.865993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:53.891498  620795 cri.go:89] found id: ""
	I1213 12:08:53.891542  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.891551  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:53.891558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:53.891621  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:53.917936  620795 cri.go:89] found id: ""
	I1213 12:08:53.917959  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.917968  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:53.917974  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:53.918032  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:53.943098  620795 cri.go:89] found id: ""
	I1213 12:08:53.943169  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.943193  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:53.943215  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:53.943252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:53.971597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:53.971637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:54.002508  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:54.002540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:54.080813  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:54.080899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:54.109629  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:54.109659  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:54.177694  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:56.677966  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:56.688667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:56.688741  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:56.713668  620795 cri.go:89] found id: ""
	I1213 12:08:56.713690  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.713699  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:56.713706  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:56.713762  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:56.741202  620795 cri.go:89] found id: ""
	I1213 12:08:56.741227  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.741236  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:56.741242  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:56.741339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:56.768922  620795 cri.go:89] found id: ""
	I1213 12:08:56.768942  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.768950  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:56.768957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:56.769013  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:56.797125  620795 cri.go:89] found id: ""
	I1213 12:08:56.797148  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.797157  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:56.797164  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:56.797218  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:56.824672  620795 cri.go:89] found id: ""
	I1213 12:08:56.824695  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.824703  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:56.824709  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:56.824763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:56.849420  620795 cri.go:89] found id: ""
	I1213 12:08:56.849446  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.849455  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:56.849462  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:56.849516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:56.875118  620795 cri.go:89] found id: ""
	I1213 12:08:56.875143  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.875152  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:56.875158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:56.875213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:56.900386  620795 cri.go:89] found id: ""
	I1213 12:08:56.900411  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.900420  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:56.900434  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:56.900446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:56.966130  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:56.966167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:56.982745  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:56.982773  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:57.073125  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:57.073146  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:57.073165  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:57.104552  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:57.104585  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:59.636110  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:59.649509  620795 out.go:203] 
	W1213 12:08:59.652376  620795 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 12:08:59.652409  620795 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 12:08:59.652418  620795 out.go:285] * Related issues:
	* Related issues:
	W1213 12:08:59.652431  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1213 12:08:59.652444  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1213 12:08:59.655226  620795 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 105
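The failed start above is the tail of a 6m0s polling loop: each cycle runs sudo pgrep -xnf kube-apiserver.*minikube.*, lists CRI containers for every control-plane component (all return 0 containers), and gathers kubelet/CRI-O/dmesg logs, until minikube exits with K8S_APISERVER_MISSING. As a rough way to re-run those same probes by hand (a sketch, not part of the test; it assumes the newest-cni-800979 kic container shown as "running" in the docker inspect output below is still up and reachable via docker exec):

  # apiserver process check, same pattern the start loop polled (it never matched)
  docker exec newest-cni-800979 pgrep -xnf 'kube-apiserver.*minikube.*'
  # apiserver container check via CRI (the log consistently reports 0 containers)
  docker exec newest-cni-800979 crictl ps -a --name kube-apiserver
  # kubelet journal, the same source the log gatherer read
  docker exec newest-cni-800979 journalctl -u kubelet -n 400 --no-pager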
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-800979
helpers_test.go:244: (dbg) docker inspect newest-cni-800979:

-- stdout --
	[
	    {
	        "Id": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	        "Created": "2025-12-13T11:52:51.619651061Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T12:02:49.509239436Z",
	            "FinishedAt": "2025-12-13T12:02:48.165379431Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hosts",
	        "LogPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef-json.log",
	        "Name": "/newest-cni-800979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-800979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-800979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	                "LowerDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-800979",
	                "Source": "/var/lib/docker/volumes/newest-cni-800979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-800979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-800979",
	                "name.minikube.sigs.k8s.io": "newest-cni-800979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24ac9a215b72ee124284f478ff764304afc09b82226a2739c7b5f0f9a84a05cd",
	            "SandboxKey": "/var/run/docker/netns/24ac9a215b72",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-800979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:2e:cf:d5:d1:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de59fc08c8081b0c37df8bacf82db2ccccb307596588e9c22d7d094938935e3c",
	                    "EndpointID": "4aeedc678fe23c218965caf6e08605f8464cbaa26208ec7a8c460ea48b3e8143",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-800979",
	                        "4aef671a766b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (367.216656ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
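The docker inspect output above also shows the apiserver port 8443/tcp published to the host at 127.0.0.1:33471, while the in-node kubectl calls in the log fail against localhost:8443 with connection refused. A quick check from the host (sketch only; the port number is taken from this run's inspect output and will differ between runs) should fail the same way for as long as no apiserver process exists:

  curl -k --max-time 5 https://127.0.0.1:33471/version || echo apiserver not reachable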
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25: (1.794276238s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable metrics-server -p embed-certs-326948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │                     │
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ stop    │ -p newest-cni-800979 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-800979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │                     │
	│ stop    │ -p no-preload-307409 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ addons  │ enable dashboard -p no-preload-307409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:03:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:03:03.050063  622913 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:03:03.050285  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050312  622913 out.go:374] Setting ErrFile to fd 2...
	I1213 12:03:03.050330  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050625  622913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:03:03.051085  622913 out.go:368] Setting JSON to false
	I1213 12:03:03.052120  622913 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13535,"bootTime":1765613848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:03:03.052229  622913 start.go:143] virtualization:  
	I1213 12:03:03.055383  622913 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:03:03.059239  622913 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:03:03.059332  622913 notify.go:221] Checking for updates...
	I1213 12:03:03.064728  622913 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:03:03.067859  622913 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:03.070706  622913 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:03:03.073576  622913 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:03:03.076392  622913 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:03:03.079655  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:03.080246  622913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:03:03.113231  622913 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:03:03.113356  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.174414  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.164880125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.174536  622913 docker.go:319] overlay module found
	I1213 12:03:03.177638  622913 out.go:179] * Using the docker driver based on existing profile
	I1213 12:03:03.180320  622913 start.go:309] selected driver: docker
	I1213 12:03:03.180343  622913 start.go:927] validating driver "docker" against &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.180449  622913 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:03:03.181174  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.236517  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.227319129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.236860  622913 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:03:03.236895  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:03.236967  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:03.237012  622913 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.241932  622913 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 12:03:03.244777  622913 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:03:03.247722  622913 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:03:03.250567  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:03.250698  622913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:03:03.250725  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.251056  622913 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251142  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 12:03:03.251161  622913 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.655µs
	I1213 12:03:03.251175  622913 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 12:03:03.251192  622913 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251240  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 12:03:03.251249  622913 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 59.291µs
	I1213 12:03:03.251256  622913 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251279  622913 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251318  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 12:03:03.251329  622913 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.749µs
	I1213 12:03:03.251341  622913 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251360  622913 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251395  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 12:03:03.251406  622913 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.043µs
	I1213 12:03:03.251413  622913 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251422  622913 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251455  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 12:03:03.251465  622913 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 43.717µs
	I1213 12:03:03.251472  622913 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251481  622913 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251564  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 12:03:03.251578  622913 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 97.437µs
	I1213 12:03:03.251585  622913 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 12:03:03.251596  622913 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251632  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 12:03:03.251642  622913 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.82µs
	I1213 12:03:03.251649  622913 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 12:03:03.251673  622913 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251707  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 12:03:03.251716  622913 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.632µs
	I1213 12:03:03.251723  622913 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 12:03:03.251729  622913 cache.go:87] Successfully saved all images to host disk.
	I1213 12:03:03.282338  622913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:03:03.282369  622913 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:03:03.282443  622913 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:03:03.282477  622913 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.282544  622913 start.go:364] duration metric: took 41.937µs to acquireMachinesLock for "no-preload-307409"
	I1213 12:03:03.282565  622913 start.go:96] Skipping create...Using existing machine configuration
	I1213 12:03:03.282570  622913 fix.go:54] fixHost starting: 
	I1213 12:03:03.282851  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.304419  622913 fix.go:112] recreateIfNeeded on no-preload-307409: state=Stopped err=<nil>
	W1213 12:03:03.304448  622913 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 12:02:59.273796  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.310724  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:02:59.374429  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.374460  620795 retry.go:31] will retry after 1.123869523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.660188  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:02:59.746796  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.746834  620795 retry.go:31] will retry after 827.424249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.773951  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.886643  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:59.984018  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.984054  620795 retry.go:31] will retry after 1.031600228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.289311  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:00.498512  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:00.574703  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:00.609412  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.609443  620795 retry.go:31] will retry after 1.594897337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:00.654022  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.654055  620795 retry.go:31] will retry after 1.847551508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.773391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.016343  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:01.149191  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.149241  620795 retry.go:31] will retry after 1.156400239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.273296  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.773106  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:02.204552  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:02.273738  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:02.274099  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.274136  620795 retry.go:31] will retry after 1.092655081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.305854  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:02.368964  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.369001  620795 retry.go:31] will retry after 1.680740365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.502311  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:02.587589  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.587627  620795 retry.go:31] will retry after 1.930642019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.773890  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.281133  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.367295  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:03.462797  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.462834  620795 retry.go:31] will retry after 1.480584037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.773095  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.050289  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:04.211663  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.211692  620795 retry.go:31] will retry after 4.628682765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.307872  622913 out.go:252] * Restarting existing docker container for "no-preload-307409" ...
	I1213 12:03:03.307964  622913 cli_runner.go:164] Run: docker start no-preload-307409
	I1213 12:03:03.599368  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.618935  622913 kic.go:430] container "no-preload-307409" state is running.
	I1213 12:03:03.619319  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:03.641333  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.641563  622913 machine.go:94] provisionDockerMachine start ...
	I1213 12:03:03.641633  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:03.663338  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:03.663870  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:03.663890  622913 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:03:03.664580  622913 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:03:06.819092  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.819117  622913 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 12:03:06.819201  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:06.837856  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:06.838181  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:06.838198  622913 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 12:03:06.997122  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.997203  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.016669  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.017014  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.017037  622913 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:03:07.176125  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:03:07.176151  622913 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:03:07.176182  622913 ubuntu.go:190] setting up certificates
	I1213 12:03:07.176201  622913 provision.go:84] configureAuth start
	I1213 12:03:07.176265  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:07.193873  622913 provision.go:143] copyHostCerts
	I1213 12:03:07.193961  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:03:07.193973  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:03:07.194049  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:03:07.194164  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:03:07.194175  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:03:07.194205  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:03:07.194267  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:03:07.194275  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:03:07.194298  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:03:07.194346  622913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
	I1213 12:03:07.397856  622913 provision.go:177] copyRemoteCerts
	I1213 12:03:07.397930  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:03:07.397969  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.415003  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:07.523762  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 12:03:07.541934  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:03:07.560353  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 12:03:07.577524  622913 provision.go:87] duration metric: took 401.305633ms to configureAuth
	I1213 12:03:07.577567  622913 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:03:07.577753  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:07.577860  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.595178  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.595492  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.595506  622913 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:03:07.957883  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:03:07.957909  622913 machine.go:97] duration metric: took 4.316335928s to provisionDockerMachine
	I1213 12:03:07.957921  622913 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 12:03:07.957933  622913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:03:07.958002  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:03:07.958068  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.976949  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:04.273235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.518978  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:04.583937  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.583972  620795 retry.go:31] will retry after 4.359648713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.773380  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.944170  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:05.011259  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.011298  620795 retry.go:31] will retry after 2.730254551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.273717  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:05.773164  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.274023  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.773331  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.742621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:07.773999  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:07.885064  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:07.885095  620795 retry.go:31] will retry after 5.399825259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.773645  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.841141  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:08.935930  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.935967  620795 retry.go:31] will retry after 8.567303782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.944298  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:09.032112  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:09.032154  620795 retry.go:31] will retry after 7.715566724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.088342  622913 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:03:08.091929  622913 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:03:08.092010  622913 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:03:08.092029  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:03:08.092100  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:03:08.092225  622913 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:03:08.092336  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:03:08.100328  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:08.119806  622913 start.go:296] duration metric: took 161.868607ms for postStartSetup
	I1213 12:03:08.119893  622913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:03:08.119935  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.137272  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.240715  622913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:03:08.245595  622913 fix.go:56] duration metric: took 4.963017027s for fixHost
	I1213 12:03:08.245624  622913 start.go:83] releasing machines lock for "no-preload-307409", held for 4.963070517s
	I1213 12:03:08.245713  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:08.262782  622913 ssh_runner.go:195] Run: cat /version.json
	I1213 12:03:08.262844  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.263126  622913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:03:08.263189  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.283140  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.296409  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.391353  622913 ssh_runner.go:195] Run: systemctl --version
	I1213 12:03:08.484408  622913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:03:08.531460  622913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:03:08.537034  622913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:03:08.537102  622913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:03:08.548165  622913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 12:03:08.548229  622913 start.go:496] detecting cgroup driver to use...
	I1213 12:03:08.548280  622913 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:03:08.548375  622913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:03:08.564936  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:03:08.579568  622913 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:03:08.579670  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:03:08.596861  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:03:08.610443  622913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:03:08.718052  622913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:03:08.841997  622913 docker.go:234] disabling docker service ...
	I1213 12:03:08.842083  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:03:08.857246  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:03:08.871656  622913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:03:09.021847  622913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:03:09.148277  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:03:09.162720  622913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:03:09.178582  622913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:03:09.178712  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.188481  622913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:03:09.188600  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.198182  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.207488  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.217314  622913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:03:09.225728  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.234602  622913 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.243163  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.251840  622913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:03:09.261376  622913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:03:09.269241  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.408118  622913 ssh_runner.go:195] Run: sudo systemctl restart crio
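	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. As a rough illustration of the two main substitutions, here is a small Go sketch under the assumption that the drop-in file is edited locally as plain text; it is not how minikube itself performs these edits (minikube runs sed over SSH, as logged).

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		conf := string(data)
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}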
	I1213 12:03:09.582010  622913 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:03:09.582116  622913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:03:09.586129  622913 start.go:564] Will wait 60s for crictl version
	I1213 12:03:09.586218  622913 ssh_runner.go:195] Run: which crictl
	I1213 12:03:09.589880  622913 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:03:09.617198  622913 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:03:09.617307  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.648039  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.680132  622913 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 12:03:09.683104  622913 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:03:09.699119  622913 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 12:03:09.703132  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.712888  622913 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:03:09.713027  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:09.713074  622913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:03:09.749883  622913 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:03:09.749906  622913 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:03:09.749914  622913 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 12:03:09.750028  622913 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 12:03:09.750104  622913 ssh_runner.go:195] Run: crio config
	I1213 12:03:09.812957  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:09.812981  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:09.813006  622913 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:03:09.813030  622913 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:03:09.813160  622913 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:03:09.813240  622913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 12:03:09.821482  622913 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:03:09.821552  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:03:09.830108  622913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 12:03:09.842772  622913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 12:03:09.855539  622913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
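	The kubeadm config generated above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. As a quick way to confirm what such a file contains, here is a small Go sketch that splits the documents and prints each apiVersion/kind; it assumes the gopkg.in/yaml.v3 package and is illustrative rather than part of the test tooling.

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Split on the YAML document separator and read only the document header fields.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var head struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &head); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("%s %s\n", head.APIVersion, head.Kind)
		}
	}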
	I1213 12:03:09.868438  622913 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:03:09.871940  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.881527  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.994807  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:10.018299  622913 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 12:03:10.018324  622913 certs.go:195] generating shared ca certs ...
	I1213 12:03:10.018341  622913 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.018485  622913 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:03:10.018546  622913 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:03:10.018560  622913 certs.go:257] generating profile certs ...
	I1213 12:03:10.018675  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 12:03:10.018739  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 12:03:10.018788  622913 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 12:03:10.018902  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:03:10.018945  622913 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:03:10.018958  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:03:10.018984  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:03:10.019011  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:03:10.019049  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:03:10.019107  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:10.019800  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:03:10.070011  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:03:10.106991  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:03:10.124508  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:03:10.141854  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 12:03:10.159596  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 12:03:10.177143  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:03:10.193680  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 12:03:10.212540  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:03:10.230850  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:03:10.247982  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:03:10.265265  622913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:03:10.280828  622913 ssh_runner.go:195] Run: openssl version
	I1213 12:03:10.287915  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.295295  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:03:10.302777  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306712  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306788  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.347657  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:03:10.355488  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.362741  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:03:10.370213  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.373963  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.374024  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.415846  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:03:10.423114  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.430238  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:03:10.437700  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441526  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441626  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.482660  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 12:03:10.490193  622913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:03:10.493922  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 12:03:10.537559  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 12:03:10.580339  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 12:03:10.624474  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 12:03:10.668005  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 12:03:10.719243  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
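	The openssl invocations above use "-checkend 86400", which exits nonzero if the certificate would expire within the next 86400 seconds (24 hours); minikube uses this to decide whether control-plane certificates need regeneration. The following Go sketch performs the equivalent check with crypto/x509; the certificate paths are taken from the log, and the program itself is only an illustration.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		for _, path := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			data, err := os.ReadFile(path)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			block, _ := pem.Decode(data)
			if block == nil {
				fmt.Fprintf(os.Stderr, "%s: no PEM block\n", path)
				continue
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			// Same condition as: openssl x509 -noout -checkend 86400
			if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
				fmt.Printf("%s expires within 24h (NotAfter %s)\n", path, cert.NotAfter)
			} else {
				fmt.Printf("%s ok\n", path)
			}
		}
	}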
	I1213 12:03:10.787031  622913 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:10.787127  622913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:03:10.787194  622913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:03:10.866441  622913 cri.go:89] found id: ""
	I1213 12:03:10.866517  622913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:03:10.878947  622913 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 12:03:10.878971  622913 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 12:03:10.879029  622913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 12:03:10.887787  622913 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 12:03:10.888361  622913 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.888611  622913 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-307409" cluster setting kubeconfig missing "no-preload-307409" context setting]
	I1213 12:03:10.889058  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.890426  622913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 12:03:10.898823  622913 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 12:03:10.898859  622913 kubeadm.go:602] duration metric: took 19.881679ms to restartPrimaryControlPlane
	I1213 12:03:10.898869  622913 kubeadm.go:403] duration metric: took 111.848044ms to StartCluster
	I1213 12:03:10.898903  622913 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.899000  622913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.900707  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.900965  622913 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:03:10.901208  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:10.901250  622913 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:03:10.901316  622913 addons.go:70] Setting storage-provisioner=true in profile "no-preload-307409"
	I1213 12:03:10.901329  622913 addons.go:239] Setting addon storage-provisioner=true in "no-preload-307409"
	I1213 12:03:10.901354  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.901796  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.902330  622913 addons.go:70] Setting dashboard=true in profile "no-preload-307409"
	I1213 12:03:10.902349  622913 addons.go:239] Setting addon dashboard=true in "no-preload-307409"
	W1213 12:03:10.902356  622913 addons.go:248] addon dashboard should already be in state true
	I1213 12:03:10.902383  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.902788  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.906749  622913 addons.go:70] Setting default-storageclass=true in profile "no-preload-307409"
	I1213 12:03:10.907002  622913 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-307409"
	I1213 12:03:10.907925  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.908085  622913 out.go:179] * Verifying Kubernetes components...
	I1213 12:03:10.911613  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:10.936135  622913 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:03:10.936200  622913 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 12:03:10.939926  622913 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 12:03:10.940040  622913 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:10.940057  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:03:10.940121  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.942800  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 12:03:10.942825  622913 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 12:03:10.942890  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.947265  622913 addons.go:239] Setting addon default-storageclass=true in "no-preload-307409"
	I1213 12:03:10.947306  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.947819  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:11.005750  622913 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.005772  622913 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:03:11.005782  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.005838  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:11.023641  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.041145  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
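The cli_runner lines above shell out to `docker container inspect` in two ways: once with a `{{.State.Status}}` template to confirm the "no-preload-307409" machine container is running before each addon is configured, and once with a nested-`index` template to resolve the host port Docker mapped to the container's 22/tcp (the Port:33473 that the sshutil clients then dial). A minimal Go sketch of that pattern follows; containerStatus and sshHostPort are illustrative names, not minikube's cli_runner API.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus mirrors the {{.State.Status}} inspect in the log above.
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // sshHostPort mirrors the nested-index template that resolves the host
    // port mapped to the container's 22/tcp (33473 in this run).
    func sshHostPort(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect ports of %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	status, _ := containerStatus("no-preload-307409")
    	port, _ := sshHostPort("no-preload-307409")
    	fmt.Printf("status=%s ssh=127.0.0.1:%s\n", status, port)
    }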
	I1213 12:03:11.111003  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:11.173593  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.173636  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 12:03:11.173654  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 12:03:11.188163  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 12:03:11.188185  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 12:03:11.213443  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 12:03:11.213508  622913 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 12:03:11.227236  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.230811  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 12:03:11.230883  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 12:03:11.251133  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 12:03:11.251205  622913 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 12:03:11.292200  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 12:03:11.292226  622913 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 12:03:11.305259  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 12:03:11.305283  622913 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 12:03:11.318210  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 12:03:11.318236  622913 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 12:03:11.331855  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:11.331882  622913 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 12:03:11.346399  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
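The apply just above bundles all ten dashboard manifests, each previously scp'd into /etc/kubernetes/addons, into a single kubectl invocation with repeated -f flags. A hedged sketch of assembling such an argument list is below; the manifest slice and the applyAll helper are illustrative, not minikube's addons code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAll builds one "kubectl apply -f a.yaml -f b.yaml ..." call from a
    // list of manifest paths, like the dashboard apply in the log above.
    func applyAll(kubectl string, manifests []string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command(kubectl, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply: %w\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    		// ...remaining dashboard manifests listed in the log
    	}
    	if err := applyAll("kubectl", manifests); err != nil {
    		fmt.Println(err)
    	}
    }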
	W1213 12:03:11.535442  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.535581  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535629  622913 retry.go:31] will retry after 290.823808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535633  622913 retry.go:31] will retry after 252.781045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535694  622913 node_ready.go:35] waiting up to 6m0s for node "no-preload-307409" to be "Ready" ...
	W1213 12:03:11.536032  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.536057  622913 retry.go:31] will retry after 294.061208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.788663  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.827131  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.830443  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.858572  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.858608  622913 retry.go:31] will retry after 534.111043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.903268  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.903302  622913 retry.go:31] will retry after 517.641227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.928403  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.928440  622913 retry.go:31] will retry after 261.246628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.190196  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:12.253861  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.253905  622913 retry.go:31] will retry after 750.097801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.392854  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:12.421390  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:12.466046  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.466119  622913 retry.go:31] will retry after 345.117349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:12.494512  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.494543  622913 retry.go:31] will retry after 582.433152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.811477  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:12.872208  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.872254  622913 retry.go:31] will retry after 1.066115266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.004542  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:09.273871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:09.773704  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.273974  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.773144  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.273093  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.773168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.273119  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.773938  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.274064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.285062  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.346306  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.346338  620795 retry.go:31] will retry after 9.878335415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.773923  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.077848  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.142906  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.142942  622913 retry.go:31] will retry after 477.26404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.177073  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.177107  622913 retry.go:31] will retry after 558.594273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.536929  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
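The node_ready warning above fails for the same underlying reason as the kubectl validation errors: the apiserver at 192.168.85.2:8443 is refusing connections, while the interleaved 620795 process is still polling with pgrep for a running kube-apiserver. A hedged Go sketch of a plain TCP readiness poll against that endpoint follows; waitForPort is an illustrative helper, not minikube's node_ready check.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForPort polls a TCP endpoint until it accepts a connection or the
    // timeout elapses, the kind of readiness wait implied by the repeated
    // connection-refused errors in the log above.
    func waitForPort(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not reachable within %s", addr, timeout)
    }

    func main() {
    	// 192.168.85.2:8443 is the apiserver endpoint this run waits on.
    	if err := waitForPort("192.168.85.2:8443", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }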
	I1213 12:03:13.621309  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:13.684925  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.684962  622913 retry.go:31] will retry after 887.0827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.735891  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.838454  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.838488  622913 retry.go:31] will retry after 1.840863262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.938866  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.997740  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.997780  622913 retry.go:31] will retry after 1.50758238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.572279  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:14.649792  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.649830  622913 retry.go:31] will retry after 2.273525411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.505555  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:15.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:15.566161  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.566200  622913 retry.go:31] will retry after 1.268984334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.680410  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:15.739773  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.739804  622913 retry.go:31] will retry after 2.516127735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.835378  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:16.919361  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.919396  622913 retry.go:31] will retry after 2.060639493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.923603  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:16.987685  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.987717  622913 retry.go:31] will retry after 3.014723999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:18.037172  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:14.273845  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:14.773934  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.774017  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.273243  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.748013  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:16.773600  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:16.899498  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.899555  620795 retry.go:31] will retry after 7.173965376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.273146  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:17.504219  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:17.614341  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.614369  620795 retry.go:31] will retry after 8.805046452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.773767  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.273931  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.773442  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.256769  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:18.385179  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.385215  622913 retry.go:31] will retry after 1.545787463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.980290  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:19.083283  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.083326  622913 retry.go:31] will retry after 3.363160165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.931900  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:19.994541  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.994572  622913 retry.go:31] will retry after 3.448577935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.003109  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:20.075345  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.075383  622913 retry.go:31] will retry after 2.247696448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:20.536209  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:22.323733  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:22.390042  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.390078  622913 retry.go:31] will retry after 4.701837343s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.447431  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:22.510069  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.510101  622913 retry.go:31] will retry after 8.996063036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:22.536655  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:19.273647  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:19.773235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.273783  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.774109  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.273100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.774041  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.273187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.773919  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:23.224947  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:23.273354  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:23.287102  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.287132  620795 retry.go:31] will retry after 17.975754277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.774029  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.073794  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:24.135298  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.135337  620795 retry.go:31] will retry after 17.719019377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.443398  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:23.501606  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.501640  622913 retry.go:31] will retry after 3.90534406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:24.537114  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:27.036285  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:27.092481  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:27.162031  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.162065  622913 retry.go:31] will retry after 11.355394108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.407221  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:27.478522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.478557  622913 retry.go:31] will retry after 8.009668822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.273481  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.773666  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.773170  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.273652  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.420263  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:26.478183  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.478224  620795 retry.go:31] will retry after 20.903659468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.773685  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.273297  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.773524  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:29.537044  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:31.506350  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:31.537137  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:31.567063  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:31.567101  622913 retry.go:31] will retry after 5.348365924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:29.273854  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:29.773973  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.273040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.773142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.273258  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.773723  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.274053  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.774024  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.273125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.773200  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:33.537277  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:35.488997  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:35.615701  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:35.615734  622913 retry.go:31] will retry after 18.593547057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:36.036633  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:36.916463  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:36.985838  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:36.985870  622913 retry.go:31] will retry after 7.879856322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:34.273224  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:34.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.273423  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.773837  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.273251  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.773088  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.773099  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.773678  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.518385  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:38.536542  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:38.629558  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:38.629596  622913 retry.go:31] will retry after 11.083764817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:40.537112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:43.037066  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:39.273565  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:39.773916  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.274028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.773120  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.263107  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:41.273658  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:41.328103  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.328152  620795 retry.go:31] will retry after 24.557962123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.773949  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.855229  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:41.913722  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.913758  620795 retry.go:31] will retry after 29.657634591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:42.273168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:42.773137  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.273064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.773040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.866836  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:44.926788  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:44.926822  622913 retry.go:31] will retry after 12.537177434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:45.536544  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:47.537056  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:44.273531  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.773694  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.273864  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.773153  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.273336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.773222  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.273977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.382145  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:47.444684  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.444761  620795 retry.go:31] will retry after 14.939941469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.773125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.773715  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.714461  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:49.810126  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:49.810163  622913 retry.go:31] will retry after 17.034686012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:50.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:52.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:49.274132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.773105  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.273278  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.773375  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.273108  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.773957  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.273086  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.773220  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.273134  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.773528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.210466  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:54.276658  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.276693  622913 retry.go:31] will retry after 15.477790737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:55.037124  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:57.464704  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:57.536423  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:57.546896  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:57.546941  622913 retry.go:31] will retry after 45.136010492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.273748  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.773661  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.273945  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.773185  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.273156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.773921  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:57.273352  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:03:57.273425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:03:57.360759  620795 cri.go:89] found id: ""
	I1213 12:03:57.360784  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.360793  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:03:57.360799  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:03:57.360899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:03:57.386673  620795 cri.go:89] found id: ""
	I1213 12:03:57.386699  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.386709  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:03:57.386715  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:03:57.386772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:03:57.412179  620795 cri.go:89] found id: ""
	I1213 12:03:57.412202  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.412211  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:03:57.412217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:03:57.412275  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:03:57.440758  620795 cri.go:89] found id: ""
	I1213 12:03:57.440782  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.440791  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:03:57.440797  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:03:57.440863  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:03:57.474164  620795 cri.go:89] found id: ""
	I1213 12:03:57.474189  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.474198  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:03:57.474205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:03:57.474266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:03:57.513790  620795 cri.go:89] found id: ""
	I1213 12:03:57.513811  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.513820  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:03:57.513826  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:03:57.513882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:03:57.549685  620795 cri.go:89] found id: ""
	I1213 12:03:57.549708  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.549716  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:03:57.549723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:03:57.549784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:03:57.575809  620795 cri.go:89] found id: ""
	I1213 12:03:57.575830  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.575839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:03:57.575848  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:03:57.575860  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:03:57.645191  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:03:57.645229  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:03:57.662016  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:03:57.662048  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:03:57.724395  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:03:57.724433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:03:57.724446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:03:57.752976  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:03:57.753012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:00.036301  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:02.037075  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:00.282268  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:00.369064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:00.369151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:00.446224  620795 cri.go:89] found id: ""
	I1213 12:04:00.446257  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.446267  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:00.446274  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:00.446398  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:00.492701  620795 cri.go:89] found id: ""
	I1213 12:04:00.492728  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.492737  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:00.492744  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:00.492814  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:00.537493  620795 cri.go:89] found id: ""
	I1213 12:04:00.537573  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.537600  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:00.537617  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:00.537703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:00.567417  620795 cri.go:89] found id: ""
	I1213 12:04:00.567457  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.567467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:00.567493  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:00.567660  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:00.597259  620795 cri.go:89] found id: ""
	I1213 12:04:00.597333  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.597358  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:00.597371  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:00.597453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:00.624935  620795 cri.go:89] found id: ""
	I1213 12:04:00.625008  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.625032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:00.625053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:00.625125  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:00.656802  620795 cri.go:89] found id: ""
	I1213 12:04:00.656830  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.656846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:00.656853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:00.656924  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:00.684243  620795 cri.go:89] found id: ""
	I1213 12:04:00.684318  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.684342  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:00.684364  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:00.684406  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:00.755205  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:00.755244  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:00.772314  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:00.772345  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:00.841157  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:00.841236  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:00.841257  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:00.870321  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:00.870357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:02.384998  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:02.445321  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:02.445354  620795 retry.go:31] will retry after 47.283712675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:03.403559  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:03.414405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:03.414472  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:03.440207  620795 cri.go:89] found id: ""
	I1213 12:04:03.440275  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.440299  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:03.440320  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:03.440406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:03.473860  620795 cri.go:89] found id: ""
	I1213 12:04:03.473906  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.473916  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:03.473923  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:03.474005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:03.500069  620795 cri.go:89] found id: ""
	I1213 12:04:03.500102  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.500111  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:03.500118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:03.500194  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:03.550253  620795 cri.go:89] found id: ""
	I1213 12:04:03.550329  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.550353  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:03.550372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:03.550459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:03.595628  620795 cri.go:89] found id: ""
	I1213 12:04:03.595713  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.595737  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:03.595757  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:03.595871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:03.626718  620795 cri.go:89] found id: ""
	I1213 12:04:03.626796  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.626827  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:03.626849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:03.626954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:03.657254  620795 cri.go:89] found id: ""
	I1213 12:04:03.657281  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.657290  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:03.657297  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:03.657356  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:03.682193  620795 cri.go:89] found id: ""
	I1213 12:04:03.682268  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.682292  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:03.682315  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:03.682355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:03.750002  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:03.750025  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:03.750039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:03.779008  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:03.779046  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:03.807344  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:03.807424  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:03.879158  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:03.879201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:04:04.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:06.845581  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:06.913058  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.913091  622913 retry.go:31] will retry after 30.701510805s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:07.036960  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:05.886355  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:05.944754  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:05.944842  620795 retry.go:31] will retry after 33.803790372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.397350  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:06.407918  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:06.407990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:06.436013  620795 cri.go:89] found id: ""
	I1213 12:04:06.436040  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.436049  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:06.436056  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:06.436121  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:06.462051  620795 cri.go:89] found id: ""
	I1213 12:04:06.462074  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.462083  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:06.462089  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:06.462147  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:06.487916  620795 cri.go:89] found id: ""
	I1213 12:04:06.487943  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.487952  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:06.487959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:06.488027  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:06.514150  620795 cri.go:89] found id: ""
	I1213 12:04:06.514181  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.514190  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:06.514196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:06.514255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:06.567862  620795 cri.go:89] found id: ""
	I1213 12:04:06.567900  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.567910  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:06.567917  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:06.567977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:06.615399  620795 cri.go:89] found id: ""
	I1213 12:04:06.615428  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.615446  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:06.615453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:06.615546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:06.645078  620795 cri.go:89] found id: ""
	I1213 12:04:06.645150  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.645174  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:06.645196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:06.645278  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:06.673976  620795 cri.go:89] found id: ""
	I1213 12:04:06.674002  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.674011  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:06.674022  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:06.674067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:06.703467  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:06.703504  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:06.731693  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:06.731721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:06.801110  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:06.801154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:06.817774  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:06.817804  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:06.899087  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
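	[editor's note] At this point both endpoints refuse connections: localhost:8443 from inside the node (kubectl describe nodes, addon applies) and 192.168.85.2:8443 from the test process (node_ready polling), and crictl repeatedly finds no kube-apiserver container at all, so the control-plane static pods never started rather than merely being unreachable. A quick manual check on the node, assuming SSH access (the curl probe is an assumption, not taken from the log):
	
	  sudo crictl ps -a --name kube-apiserver     # from the log: returns no containers
	  curl -k https://192.168.85.2:8443/healthz   # hypothetical direct probe of the apiserver port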
	W1213 12:04:09.536141  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.755504  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:09.840522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:09.840549  622913 retry.go:31] will retry after 18.501787354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:11.536619  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.400132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:09.410430  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:09.410500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:09.440067  620795 cri.go:89] found id: ""
	I1213 12:04:09.440090  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.440100  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:09.440107  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:09.440167  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:09.470041  620795 cri.go:89] found id: ""
	I1213 12:04:09.470062  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.470071  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:09.470078  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:09.470135  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:09.496421  620795 cri.go:89] found id: ""
	I1213 12:04:09.496444  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.496453  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:09.496459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:09.496516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:09.535210  620795 cri.go:89] found id: ""
	I1213 12:04:09.535233  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.535241  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:09.535248  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:09.535322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:09.593867  620795 cri.go:89] found id: ""
	I1213 12:04:09.593894  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.593905  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:09.593912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:09.593967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:09.633869  620795 cri.go:89] found id: ""
	I1213 12:04:09.633895  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.633904  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:09.633911  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:09.633967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:09.660082  620795 cri.go:89] found id: ""
	I1213 12:04:09.660104  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.660113  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:09.660119  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:09.660180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:09.686975  620795 cri.go:89] found id: ""
	I1213 12:04:09.687005  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.687013  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:09.687023  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:09.687035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:09.756960  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:09.756994  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:09.779895  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:09.779929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:09.858208  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:09.858229  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:09.858243  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:09.886438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:09.886472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:11.571741  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:11.635299  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:11.635338  620795 retry.go:31] will retry after 28.848947099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:12.418247  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:12.428921  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:12.428996  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:12.453422  620795 cri.go:89] found id: ""
	I1213 12:04:12.453447  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.453455  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:12.453462  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:12.453523  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:12.482791  620795 cri.go:89] found id: ""
	I1213 12:04:12.482818  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.482827  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:12.482834  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:12.482892  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:12.509185  620795 cri.go:89] found id: ""
	I1213 12:04:12.509207  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.509216  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:12.509222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:12.509281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:12.555782  620795 cri.go:89] found id: ""
	I1213 12:04:12.555810  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.555820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:12.555868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:12.555953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:12.609661  620795 cri.go:89] found id: ""
	I1213 12:04:12.609682  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.609691  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:12.609697  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:12.609753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:12.636223  620795 cri.go:89] found id: ""
	I1213 12:04:12.636251  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.636268  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:12.636275  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:12.636335  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:12.663456  620795 cri.go:89] found id: ""
	I1213 12:04:12.663484  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.663493  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:12.663499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:12.663583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:12.688687  620795 cri.go:89] found id: ""
	I1213 12:04:12.688714  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.688723  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:12.688733  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:12.688745  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:12.705209  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:12.705240  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:12.766977  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:12.767041  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:12.767064  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:12.795358  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:12.795396  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:12.823112  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:12.823143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:04:14.037178  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:16.536405  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:15.388432  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:15.398781  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:15.398905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:15.425880  620795 cri.go:89] found id: ""
	I1213 12:04:15.425920  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.425929  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:15.425935  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:15.426005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:15.451424  620795 cri.go:89] found id: ""
	I1213 12:04:15.451467  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.451477  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:15.451486  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:15.451583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:15.476481  620795 cri.go:89] found id: ""
	I1213 12:04:15.476525  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.476534  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:15.476541  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:15.476612  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:15.502062  620795 cri.go:89] found id: ""
	I1213 12:04:15.502088  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.502097  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:15.502104  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:15.502173  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:15.588057  620795 cri.go:89] found id: ""
	I1213 12:04:15.588132  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.588155  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:15.588175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:15.588279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:15.616479  620795 cri.go:89] found id: ""
	I1213 12:04:15.616506  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.616519  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:15.616526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:15.616602  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:15.649712  620795 cri.go:89] found id: ""
	I1213 12:04:15.649789  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.649813  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:15.649827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:15.649912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:15.675926  620795 cri.go:89] found id: ""
	I1213 12:04:15.675995  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.676019  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:15.676034  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:15.676049  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:15.692725  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:15.692755  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:15.759900  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:15.759963  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:15.759989  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:15.789315  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:15.789425  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:15.818647  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:15.818675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.385812  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:18.396389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:18.396461  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:18.422777  620795 cri.go:89] found id: ""
	I1213 12:04:18.422800  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.422808  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:18.422814  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:18.422873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:18.448579  620795 cri.go:89] found id: ""
	I1213 12:04:18.448607  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.448616  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:18.448622  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:18.448677  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:18.474629  620795 cri.go:89] found id: ""
	I1213 12:04:18.474707  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.474744  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:18.474768  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:18.474859  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:18.499793  620795 cri.go:89] found id: ""
	I1213 12:04:18.499819  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.499828  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:18.499837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:18.499894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:18.531333  620795 cri.go:89] found id: ""
	I1213 12:04:18.531368  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.531377  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:18.531383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:18.531450  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:18.583893  620795 cri.go:89] found id: ""
	I1213 12:04:18.583923  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.583932  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:18.583939  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:18.584008  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:18.620082  620795 cri.go:89] found id: ""
	I1213 12:04:18.620120  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.620129  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:18.620135  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:18.620210  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:18.647112  620795 cri.go:89] found id: ""
	I1213 12:04:18.647137  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.647145  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:18.647155  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:18.647167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.712791  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:18.712833  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:18.728892  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:18.728920  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:18.793078  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:18.793150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:18.793172  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:18.821911  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:18.821947  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:18.537035  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:20.537076  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:23.036959  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:21.353995  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:21.364153  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:21.364265  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:21.389593  620795 cri.go:89] found id: ""
	I1213 12:04:21.389673  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.389690  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:21.389698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:21.389773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:21.418684  620795 cri.go:89] found id: ""
	I1213 12:04:21.418706  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.418715  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:21.418722  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:21.418778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:21.442724  620795 cri.go:89] found id: ""
	I1213 12:04:21.442799  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.442822  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:21.442841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:21.442927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:21.472117  620795 cri.go:89] found id: ""
	I1213 12:04:21.472141  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.472150  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:21.472156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:21.472213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:21.501589  620795 cri.go:89] found id: ""
	I1213 12:04:21.501612  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.501621  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:21.501627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:21.501688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:21.563954  620795 cri.go:89] found id: ""
	I1213 12:04:21.564023  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.564046  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:21.564069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:21.564151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:21.612229  620795 cri.go:89] found id: ""
	I1213 12:04:21.612263  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.612273  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:21.612280  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:21.612339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:21.639602  620795 cri.go:89] found id: ""
	I1213 12:04:21.639636  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.639645  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:21.639655  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:21.639669  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:21.705516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:21.705552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:21.722491  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:21.722521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:21.783641  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:21.783663  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:21.783676  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:21.811307  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:21.811340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:25.037157  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:27.037243  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:24.340508  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:24.351403  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:24.351482  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:24.382302  620795 cri.go:89] found id: ""
	I1213 12:04:24.382379  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.382404  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:24.382425  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:24.382538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:24.408839  620795 cri.go:89] found id: ""
	I1213 12:04:24.408862  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.408871  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:24.408878  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:24.408936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:24.435623  620795 cri.go:89] found id: ""
	I1213 12:04:24.435651  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.435661  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:24.435667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:24.435727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:24.461121  620795 cri.go:89] found id: ""
	I1213 12:04:24.461149  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.461158  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:24.461165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:24.461251  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:24.486111  620795 cri.go:89] found id: ""
	I1213 12:04:24.486144  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.486153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:24.486176  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:24.486257  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:24.511493  620795 cri.go:89] found id: ""
	I1213 12:04:24.511567  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.511578  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:24.511585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:24.511646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:24.546004  620795 cri.go:89] found id: ""
	I1213 12:04:24.546029  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.546052  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:24.546059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:24.546129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:24.573601  620795 cri.go:89] found id: ""
	I1213 12:04:24.573677  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.573699  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:24.573720  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:24.573758  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:24.651738  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:24.651779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:24.669002  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:24.669035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:24.734744  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:24.734767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:24.734780  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:24.763652  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:24.763687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.296287  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:27.306558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:27.306632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:27.331288  620795 cri.go:89] found id: ""
	I1213 12:04:27.331315  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.331324  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:27.331331  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:27.331388  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:27.357587  620795 cri.go:89] found id: ""
	I1213 12:04:27.357611  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.357620  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:27.357626  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:27.357681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:27.383604  620795 cri.go:89] found id: ""
	I1213 12:04:27.383628  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.383637  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:27.383644  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:27.383699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:27.408104  620795 cri.go:89] found id: ""
	I1213 12:04:27.408183  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.408199  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:27.408207  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:27.408273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:27.434284  620795 cri.go:89] found id: ""
	I1213 12:04:27.434309  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.434318  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:27.434325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:27.434389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:27.459356  620795 cri.go:89] found id: ""
	I1213 12:04:27.459382  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.459391  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:27.459399  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:27.459457  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:27.484476  620795 cri.go:89] found id: ""
	I1213 12:04:27.484543  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.484558  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:27.484565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:27.484630  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:27.510910  620795 cri.go:89] found id: ""
	I1213 12:04:27.510937  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.510946  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:27.510955  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:27.510967  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:27.543054  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:27.543085  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:27.641750  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:27.641818  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:27.641838  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:27.671375  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:27.671412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.701704  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:27.701735  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:28.342721  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:28.405775  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:28.405881  622913 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:29.536294  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:31.536581  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:30.268871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:30.279472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:30.279561  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:30.305479  620795 cri.go:89] found id: ""
	I1213 12:04:30.305504  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.305513  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:30.305520  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:30.305577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:30.330879  620795 cri.go:89] found id: ""
	I1213 12:04:30.330904  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.330914  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:30.330920  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:30.330978  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:30.358794  620795 cri.go:89] found id: ""
	I1213 12:04:30.358821  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.358830  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:30.358837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:30.358899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:30.384574  620795 cri.go:89] found id: ""
	I1213 12:04:30.384648  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.384662  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:30.384669  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:30.384728  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:30.409348  620795 cri.go:89] found id: ""
	I1213 12:04:30.409374  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.409383  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:30.409390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:30.409460  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:30.435261  620795 cri.go:89] found id: ""
	I1213 12:04:30.435286  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.435295  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:30.435302  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:30.435357  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:30.459810  620795 cri.go:89] found id: ""
	I1213 12:04:30.459834  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.459843  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:30.459849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:30.459906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:30.485697  620795 cri.go:89] found id: ""
	I1213 12:04:30.485720  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.485728  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:30.485738  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:30.485749  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:30.513499  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:30.513534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:30.574739  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:30.574767  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:30.658042  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:30.658078  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:30.678263  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:30.678291  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:30.741695  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.242096  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:33.253053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:33.253146  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:33.279722  620795 cri.go:89] found id: ""
	I1213 12:04:33.279748  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.279756  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:33.279764  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:33.279820  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:33.306092  620795 cri.go:89] found id: ""
	I1213 12:04:33.306129  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.306139  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:33.306163  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:33.306252  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:33.332772  620795 cri.go:89] found id: ""
	I1213 12:04:33.332796  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.332813  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:33.332819  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:33.332882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:33.367716  620795 cri.go:89] found id: ""
	I1213 12:04:33.367744  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.367754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:33.367760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:33.367822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:33.400175  620795 cri.go:89] found id: ""
	I1213 12:04:33.400242  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.400258  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:33.400266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:33.400325  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:33.424852  620795 cri.go:89] found id: ""
	I1213 12:04:33.424877  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.424887  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:33.424894  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:33.424984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:33.453556  620795 cri.go:89] found id: ""
	I1213 12:04:33.453581  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.453590  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:33.453597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:33.453653  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:33.479131  620795 cri.go:89] found id: ""
	I1213 12:04:33.479156  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.479165  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:33.479175  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:33.479187  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:33.549906  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:33.550637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:33.572706  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:33.572863  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:33.662497  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.662522  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:33.662535  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:33.692067  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:33.692111  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:33.536622  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:36.036352  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:37.615506  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:37.688522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:37.688627  622913 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:38.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:36.220187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:36.230829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:36.230906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:36.260247  620795 cri.go:89] found id: ""
	I1213 12:04:36.260271  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.260280  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:36.260286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:36.260342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:36.285940  620795 cri.go:89] found id: ""
	I1213 12:04:36.285973  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.285982  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:36.285988  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:36.286059  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:36.311531  620795 cri.go:89] found id: ""
	I1213 12:04:36.311553  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.311561  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:36.311568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:36.311633  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:36.336755  620795 cri.go:89] found id: ""
	I1213 12:04:36.336849  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.336865  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:36.336873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:36.336933  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:36.361652  620795 cri.go:89] found id: ""
	I1213 12:04:36.361676  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.361684  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:36.361690  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:36.361748  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:36.392507  620795 cri.go:89] found id: ""
	I1213 12:04:36.392530  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.392539  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:36.392545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:36.392601  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:36.418503  620795 cri.go:89] found id: ""
	I1213 12:04:36.418526  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.418535  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:36.418540  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:36.418614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:36.444832  620795 cri.go:89] found id: ""
	I1213 12:04:36.444856  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.444865  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:36.444874  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:36.444891  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:36.515523  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:36.515566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:36.535671  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:36.535699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:36.655383  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:36.655406  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:36.655421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:36.684176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:36.684212  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.215366  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:39.225843  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:39.225914  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:04:40.037338  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:42.538150  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:42.683554  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:42.744769  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:42.744869  622913 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.747993  622913 out.go:179] * Enabled addons: 
	I1213 12:04:42.750740  622913 addons.go:530] duration metric: took 1m31.849485278s for enable addons: enabled=[]
	I1213 12:04:39.251825  620795 cri.go:89] found id: ""
	I1213 12:04:39.251850  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.251860  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:39.251867  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:39.251927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:39.280966  620795 cri.go:89] found id: ""
	I1213 12:04:39.280991  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.281000  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:39.281007  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:39.281063  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:39.305488  620795 cri.go:89] found id: ""
	I1213 12:04:39.305511  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.305520  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:39.305526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:39.305583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:39.330461  620795 cri.go:89] found id: ""
	I1213 12:04:39.330484  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.330493  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:39.330500  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:39.330556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:39.355410  620795 cri.go:89] found id: ""
	I1213 12:04:39.355483  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.355507  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:39.355565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:39.355706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:39.384890  620795 cri.go:89] found id: ""
	I1213 12:04:39.384916  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.384926  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:39.384933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:39.385017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:39.409735  620795 cri.go:89] found id: ""
	I1213 12:04:39.409758  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.409767  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:39.409773  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:39.409833  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:39.439648  620795 cri.go:89] found id: ""
	I1213 12:04:39.439673  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.439685  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:39.439695  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:39.439706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:39.505768  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:39.505803  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:39.525572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:39.525602  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:39.624619  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:39.624643  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:39.624656  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:39.653269  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:39.653306  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.749621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:39.805957  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:39.806064  620795 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:40.484759  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:40.549677  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:40.549776  620795 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.182348  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:42.195718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:42.195860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:42.224999  620795 cri.go:89] found id: ""
	I1213 12:04:42.225044  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.225058  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:42.225067  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:42.225192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:42.254835  620795 cri.go:89] found id: ""
	I1213 12:04:42.254913  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.254949  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:42.254975  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:42.255077  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:42.283814  620795 cri.go:89] found id: ""
	I1213 12:04:42.283889  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.283916  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:42.283931  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:42.284014  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:42.315795  620795 cri.go:89] found id: ""
	I1213 12:04:42.315823  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.315859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:42.315871  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:42.315954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:42.342987  620795 cri.go:89] found id: ""
	I1213 12:04:42.343026  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.343035  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:42.343042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:42.343114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:42.368935  620795 cri.go:89] found id: ""
	I1213 12:04:42.368969  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.368978  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:42.368986  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:42.369052  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:42.398633  620795 cri.go:89] found id: ""
	I1213 12:04:42.398703  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.398727  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:42.398747  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:42.398834  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:42.424223  620795 cri.go:89] found id: ""
	I1213 12:04:42.424299  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.424324  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:42.424342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:42.424367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:42.453160  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:42.453198  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:42.486810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:42.486840  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:42.567003  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:42.567043  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:42.606556  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:42.606591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:42.678272  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
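The cycle above is minikube's standard diagnostics pass when no kube-apiserver is reachable: it probes for the apiserver process, lists CRI containers by component name, then collects CRI-O, kubelet, dmesg and `kubectl describe nodes` output. The same checks can be reproduced by hand on the node; the sketch below uses only commands already shown in this log, and the binary/kubeconfig paths assume the /var/lib/minikube layout from this run.

	# is a kube-apiserver process for this minikube profile running?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# any control-plane containers known to CRI-O, running or exited?
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# recent container-runtime and kubelet logs
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# kernel warnings and errors
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# node view through the local apiserver
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

On this node the container listings come back empty and kubectl fails with "connection refused", consistent with the log above: no control-plane containers were ever started.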
	W1213 12:04:45.037213  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:47.536268  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:45.178582  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:45.193685  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:45.193792  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:45.236374  620795 cri.go:89] found id: ""
	I1213 12:04:45.236402  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.236411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:45.236419  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:45.236487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:45.279160  620795 cri.go:89] found id: ""
	I1213 12:04:45.279193  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.279203  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:45.279210  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:45.279281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:45.308966  620795 cri.go:89] found id: ""
	I1213 12:04:45.308991  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.309000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:45.309006  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:45.309065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:45.337083  620795 cri.go:89] found id: ""
	I1213 12:04:45.337110  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.337119  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:45.337126  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:45.337212  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:45.366596  620795 cri.go:89] found id: ""
	I1213 12:04:45.366619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.366628  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:45.366635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:45.366694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:45.391548  620795 cri.go:89] found id: ""
	I1213 12:04:45.391572  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.391581  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:45.391588  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:45.391649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:45.418598  620795 cri.go:89] found id: ""
	I1213 12:04:45.418619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.418628  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:45.418635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:45.418700  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:45.448270  620795 cri.go:89] found id: ""
	I1213 12:04:45.448292  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.448301  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:45.448310  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:45.448321  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:45.478882  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:45.478907  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:45.548829  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:45.548916  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:45.567213  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:45.567382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:45.681775  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:45.681800  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:45.681816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.211634  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:48.222293  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:48.222364  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:48.249683  620795 cri.go:89] found id: ""
	I1213 12:04:48.249707  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.249715  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:48.249722  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:48.249785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:48.277977  620795 cri.go:89] found id: ""
	I1213 12:04:48.277999  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.278009  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:48.278015  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:48.278072  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:48.304052  620795 cri.go:89] found id: ""
	I1213 12:04:48.304080  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.304089  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:48.304096  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:48.304153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:48.334039  620795 cri.go:89] found id: ""
	I1213 12:04:48.334066  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.334075  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:48.334087  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:48.334151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:48.364623  620795 cri.go:89] found id: ""
	I1213 12:04:48.364646  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.364654  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:48.364661  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:48.364723  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:48.389613  620795 cri.go:89] found id: ""
	I1213 12:04:48.389684  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.389707  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:48.389718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:48.389797  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:48.418439  620795 cri.go:89] found id: ""
	I1213 12:04:48.418467  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.418477  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:48.418485  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:48.418544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:48.446312  620795 cri.go:89] found id: ""
	I1213 12:04:48.446341  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.446350  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:48.446360  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:48.446372  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:48.463031  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:48.463116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:48.558736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:48.558767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:48.558782  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.606808  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:48.606885  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:48.638169  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:48.638199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:49.729332  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:49.791669  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:49.791778  620795 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:49.794717  620795 out.go:179] * Enabled addons: 
	W1213 12:04:50.037029  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:52.037265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:49.797659  620795 addons.go:530] duration metric: took 1m53.008142261s for enable addons: enabled=[]
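The addon failure above is a secondary symptom: `kubectl apply` validates the manifest against the server's OpenAPI schema (the "failed to download openapi" error), so with the apiserver down even a valid /etc/kubernetes/addons/storageclass.yaml cannot be applied. Skipping validation via the `--validate=false` route the error message mentions only removes the schema check; creating the StorageClass still needs a reachable apiserver on localhost:8443. For reference, the exact command minikube retried:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml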
	I1213 12:04:51.210580  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:51.221809  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:51.221877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:51.247182  620795 cri.go:89] found id: ""
	I1213 12:04:51.247259  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.247282  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:51.247301  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:51.247396  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:51.275541  620795 cri.go:89] found id: ""
	I1213 12:04:51.275608  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.275623  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:51.275631  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:51.275695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:51.300774  620795 cri.go:89] found id: ""
	I1213 12:04:51.300866  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.300889  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:51.300902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:51.300973  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:51.330039  620795 cri.go:89] found id: ""
	I1213 12:04:51.330064  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.330074  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:51.330080  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:51.330152  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:51.358455  620795 cri.go:89] found id: ""
	I1213 12:04:51.358482  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.358491  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:51.358497  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:51.358556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:51.387907  620795 cri.go:89] found id: ""
	I1213 12:04:51.387933  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.387942  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:51.387948  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:51.388011  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:51.414050  620795 cri.go:89] found id: ""
	I1213 12:04:51.414075  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.414084  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:51.414091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:51.414148  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:51.440682  620795 cri.go:89] found id: ""
	I1213 12:04:51.440715  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.440729  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:51.440739  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:51.440752  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:51.502275  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:51.502296  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:51.502308  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:51.533683  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:51.533722  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:51.590439  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:51.590468  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:51.668678  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:51.668719  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.186166  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:54.196649  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:54.196718  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:54.221630  620795 cri.go:89] found id: ""
	I1213 12:04:54.221656  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.221665  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:54.221672  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:54.221729  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1213 12:04:54.537026  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:56.537082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:54.246332  620795 cri.go:89] found id: ""
	I1213 12:04:54.246354  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.246362  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:54.246368  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:54.246425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:54.274363  620795 cri.go:89] found id: ""
	I1213 12:04:54.274385  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.274396  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:54.274405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:54.274465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:54.299013  620795 cri.go:89] found id: ""
	I1213 12:04:54.299036  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.299045  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:54.299051  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:54.299115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:54.325098  620795 cri.go:89] found id: ""
	I1213 12:04:54.325123  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.325133  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:54.325140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:54.325200  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:54.350290  620795 cri.go:89] found id: ""
	I1213 12:04:54.350318  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.350327  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:54.350334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:54.350394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:54.377186  620795 cri.go:89] found id: ""
	I1213 12:04:54.377209  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.377218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:54.377224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:54.377283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:54.409137  620795 cri.go:89] found id: ""
	I1213 12:04:54.409164  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.409174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:54.409184  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:54.409196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.426177  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:54.426207  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:54.491873  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:54.491896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:54.491909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:54.521061  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:54.521153  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:54.580593  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:54.580623  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.166168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:57.177178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:57.177255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:57.209135  620795 cri.go:89] found id: ""
	I1213 12:04:57.209170  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.209179  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:57.209186  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:57.209254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:57.236323  620795 cri.go:89] found id: ""
	I1213 12:04:57.236359  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.236368  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:57.236375  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:57.236433  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:57.261970  620795 cri.go:89] found id: ""
	I1213 12:04:57.261992  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.262001  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:57.262007  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:57.262064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:57.287149  620795 cri.go:89] found id: ""
	I1213 12:04:57.287171  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.287179  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:57.287186  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:57.287242  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:57.312282  620795 cri.go:89] found id: ""
	I1213 12:04:57.312307  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.312316  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:57.312322  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:57.312380  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:57.341454  620795 cri.go:89] found id: ""
	I1213 12:04:57.341480  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.341489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:57.341496  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:57.341559  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:57.366694  620795 cri.go:89] found id: ""
	I1213 12:04:57.366718  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.366729  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:57.366736  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:57.366795  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:57.392434  620795 cri.go:89] found id: ""
	I1213 12:04:57.392459  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.392468  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:57.392478  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:57.392490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:57.426595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:57.426622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.490950  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:57.490984  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:57.508294  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:57.508326  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:57.637638  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:57.637717  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:57.637746  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:04:59.037033  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:01.536339  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
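The interleaved 622913 warnings come from the no-preload-307409 start, which polls the node's Ready condition directly against https://192.168.85.2:8443 and hits the same refused connection. A manual check of that condition could look like the sketch below, assuming minikube created a kubeconfig context named after the profile (its default behaviour):

	# prints "True" or "False" once an apiserver is answering; until then it fails to connect
	kubectl --context no-preload-307409 get node no-preload-307409 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'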
	I1213 12:05:00.166037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:00.211490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:00.212114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:00.294178  620795 cri.go:89] found id: ""
	I1213 12:05:00.294201  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.294210  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:00.294217  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:00.294285  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:00.376480  620795 cri.go:89] found id: ""
	I1213 12:05:00.376506  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.376516  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:00.376523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:00.376593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:00.416213  620795 cri.go:89] found id: ""
	I1213 12:05:00.416240  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.416250  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:00.416261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:00.416329  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:00.449590  620795 cri.go:89] found id: ""
	I1213 12:05:00.449620  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.449629  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:00.449637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:00.449722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:00.479461  620795 cri.go:89] found id: ""
	I1213 12:05:00.479486  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.479495  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:00.479502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:00.479589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:00.509094  620795 cri.go:89] found id: ""
	I1213 12:05:00.509123  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.509132  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:00.509138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:00.509204  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:00.583923  620795 cri.go:89] found id: ""
	I1213 12:05:00.583952  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.583962  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:00.583969  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:00.584049  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:00.624268  620795 cri.go:89] found id: ""
	I1213 12:05:00.624299  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.624309  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:00.624322  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:00.624334  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:00.701394  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:00.701419  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:00.701432  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:00.730125  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:00.730170  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:00.760465  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:00.760494  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:00.826577  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:00.826619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.345642  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:03.359010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:03.359082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:03.391792  620795 cri.go:89] found id: ""
	I1213 12:05:03.391816  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.391825  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:03.391832  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:03.391889  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:03.418730  620795 cri.go:89] found id: ""
	I1213 12:05:03.418759  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.418768  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:03.418774  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:03.418831  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:03.447034  620795 cri.go:89] found id: ""
	I1213 12:05:03.447062  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.447070  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:03.447077  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:03.447137  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:03.471737  620795 cri.go:89] found id: ""
	I1213 12:05:03.471763  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.471772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:03.471778  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:03.471832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:03.496618  620795 cri.go:89] found id: ""
	I1213 12:05:03.496641  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.496650  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:03.496656  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:03.496721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:03.538834  620795 cri.go:89] found id: ""
	I1213 12:05:03.538855  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.538901  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:03.538915  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:03.539006  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:03.577353  620795 cri.go:89] found id: ""
	I1213 12:05:03.577375  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.577437  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:03.577445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:03.577590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:03.613163  620795 cri.go:89] found id: ""
	I1213 12:05:03.613234  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.613247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:03.613257  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:03.613296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:03.652148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:03.652174  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:03.718838  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:03.718879  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.736159  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:03.736189  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:03.801478  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:03.801504  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:03.801519  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
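The cycle above shows every control-plane container query coming back empty and kubectl failing with "connection refused" on localhost:8443, i.e. nothing is listening where the apiserver should be. A minimal Go sketch of that same reachability check follows; the port, the /healthz path, and the skipped certificate verification are illustrative assumptions, not values taken from minikube's code.

// probe_apiserver.go - illustrative only: report whether the local apiserver
// endpoint answers at all, which is the condition the log above keeps testing.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate, so verification is
		// skipped for this throwaway probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With no apiserver running this prints "connect: connection refused",
		// matching the errors in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded with", resp.Status)
}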
	W1213 12:05:03.537034  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:06.036238  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:08.037112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:06.330711  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:06.341136  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:06.341246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:06.366066  620795 cri.go:89] found id: ""
	I1213 12:05:06.366099  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.366108  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:06.366114  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:06.366178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:06.394525  620795 cri.go:89] found id: ""
	I1213 12:05:06.394563  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.394573  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:06.394580  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:06.394649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:06.424244  620795 cri.go:89] found id: ""
	I1213 12:05:06.424312  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.424336  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:06.424357  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:06.424449  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:06.450497  620795 cri.go:89] found id: ""
	I1213 12:05:06.450529  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.450538  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:06.450545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:06.450614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:06.475735  620795 cri.go:89] found id: ""
	I1213 12:05:06.475759  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.475768  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:06.475774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:06.475835  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:06.501224  620795 cri.go:89] found id: ""
	I1213 12:05:06.501248  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.501257  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:06.501263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:06.501322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:06.548385  620795 cri.go:89] found id: ""
	I1213 12:05:06.548410  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.548419  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:06.548425  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:06.548498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:06.613365  620795 cri.go:89] found id: ""
	I1213 12:05:06.613444  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.613469  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:06.613490  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:06.613525  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:06.642036  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:06.642067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:06.675194  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:06.675218  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:06.743889  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:06.743933  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:06.760968  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:06.761004  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:06.828998  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:05:10.037152  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:12.536415  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:09.329981  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:09.340577  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:09.340644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:09.368902  620795 cri.go:89] found id: ""
	I1213 12:05:09.368926  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.368935  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:09.368941  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:09.369004  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:09.397232  620795 cri.go:89] found id: ""
	I1213 12:05:09.397263  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.397273  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:09.397280  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:09.397353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:09.424425  620795 cri.go:89] found id: ""
	I1213 12:05:09.424455  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.424465  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:09.424471  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:09.424529  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:09.449435  620795 cri.go:89] found id: ""
	I1213 12:05:09.449457  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.449466  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:09.449472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:09.449534  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:09.473489  620795 cri.go:89] found id: ""
	I1213 12:05:09.473512  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.473521  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:09.473527  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:09.473584  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:09.503533  620795 cri.go:89] found id: ""
	I1213 12:05:09.503560  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.503569  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:09.503576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:09.503632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:09.569217  620795 cri.go:89] found id: ""
	I1213 12:05:09.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.569312  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:09.569331  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:09.569431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:09.616563  620795 cri.go:89] found id: ""
	I1213 12:05:09.616632  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.616663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:09.616686  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:09.616726  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:09.645190  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:09.645217  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:09.710725  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:09.710760  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:09.727200  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:09.727231  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:09.793579  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:09.793611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:09.793625  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.321617  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:12.332442  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:12.332517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:12.357812  620795 cri.go:89] found id: ""
	I1213 12:05:12.357835  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.357844  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:12.357851  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:12.357912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:12.383803  620795 cri.go:89] found id: ""
	I1213 12:05:12.383827  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.383836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:12.383842  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:12.383902  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:12.408966  620795 cri.go:89] found id: ""
	I1213 12:05:12.409044  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.409061  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:12.409069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:12.409183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:12.438466  620795 cri.go:89] found id: ""
	I1213 12:05:12.438491  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.438499  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:12.438506  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:12.438562  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:12.468347  620795 cri.go:89] found id: ""
	I1213 12:05:12.468375  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.468385  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:12.468391  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:12.468455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:12.493833  620795 cri.go:89] found id: ""
	I1213 12:05:12.493860  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.493869  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:12.493876  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:12.493936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:12.540091  620795 cri.go:89] found id: ""
	I1213 12:05:12.540120  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.540130  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:12.540137  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:12.540202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:12.593138  620795 cri.go:89] found id: ""
	I1213 12:05:12.593165  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.593174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:12.593184  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:12.593195  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:12.670751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:12.670790  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:12.688162  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:12.688196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:12.753953  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:12.753978  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:12.753990  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.782410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:12.782447  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:14.537113  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:17.037129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
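The interleaved warnings from process 622913 come from a separate cluster start polling the node's Ready condition against https://192.168.85.2:8443 and retrying while the connection is refused. A small client-go sketch of reading that condition is shown below, assuming a reachable cluster; the kubeconfig path is a placeholder, and the error branch is what fires while the apiserver is still down.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path, not a value taken from the test run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-307409", metav1.GetOptions{})
	if err != nil {
		// With the apiserver down this fails much like the warnings above.
		fmt.Println("error getting node:", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}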
	I1213 12:05:15.314766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:15.325177  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:15.325244  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:15.350233  620795 cri.go:89] found id: ""
	I1213 12:05:15.350259  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.350269  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:15.350276  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:15.350332  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:15.375095  620795 cri.go:89] found id: ""
	I1213 12:05:15.375121  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.375131  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:15.375138  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:15.375198  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:15.400509  620795 cri.go:89] found id: ""
	I1213 12:05:15.400531  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.400539  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:15.400545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:15.400604  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:15.429727  620795 cri.go:89] found id: ""
	I1213 12:05:15.429749  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.429758  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:15.429765  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:15.429818  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:15.455300  620795 cri.go:89] found id: ""
	I1213 12:05:15.455321  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.455330  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:15.455336  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:15.455393  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:15.480516  620795 cri.go:89] found id: ""
	I1213 12:05:15.480540  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.480549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:15.480556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:15.480617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:15.508281  620795 cri.go:89] found id: ""
	I1213 12:05:15.508358  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.508375  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:15.508382  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:15.508453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:15.569260  620795 cri.go:89] found id: ""
	I1213 12:05:15.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.569295  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:15.569304  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:15.569317  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:15.653590  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:15.653630  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:15.670770  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:15.670805  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:15.734152  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:15.734221  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:15.734248  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:15.762906  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:15.762941  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.292789  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:18.303334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:18.303410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:18.329348  620795 cri.go:89] found id: ""
	I1213 12:05:18.329372  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.329382  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:18.329389  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:18.329455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:18.358617  620795 cri.go:89] found id: ""
	I1213 12:05:18.358638  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.358647  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:18.358653  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:18.358710  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:18.383565  620795 cri.go:89] found id: ""
	I1213 12:05:18.383589  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.383597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:18.383603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:18.383666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:18.409351  620795 cri.go:89] found id: ""
	I1213 12:05:18.409378  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.409387  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:18.409394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:18.409456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:18.435771  620795 cri.go:89] found id: ""
	I1213 12:05:18.435797  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.435806  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:18.435813  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:18.435875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:18.464513  620795 cri.go:89] found id: ""
	I1213 12:05:18.464539  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.464549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:18.464556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:18.464659  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:18.490219  620795 cri.go:89] found id: ""
	I1213 12:05:18.490244  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.490252  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:18.490260  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:18.490317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:18.532969  620795 cri.go:89] found id: ""
	I1213 12:05:18.532995  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.533004  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:18.533013  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:18.533027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.595123  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:18.595154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:18.672161  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:18.672201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:18.689194  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:18.689222  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:18.754503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:18.754526  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:18.754539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:05:19.537079  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:22.037194  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:21.283365  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:21.294092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:21.294183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:21.321526  620795 cri.go:89] found id: ""
	I1213 12:05:21.321549  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.321559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:21.321565  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:21.321622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:21.349919  620795 cri.go:89] found id: ""
	I1213 12:05:21.349943  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.349952  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:21.349958  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:21.350021  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:21.379881  620795 cri.go:89] found id: ""
	I1213 12:05:21.379906  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.379915  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:21.379922  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:21.379982  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:21.405656  620795 cri.go:89] found id: ""
	I1213 12:05:21.405679  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.405687  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:21.405694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:21.405754  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:21.435716  620795 cri.go:89] found id: ""
	I1213 12:05:21.435752  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.435762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:21.435769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:21.435839  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:21.461176  620795 cri.go:89] found id: ""
	I1213 12:05:21.461199  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.461207  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:21.461214  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:21.461271  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:21.487321  620795 cri.go:89] found id: ""
	I1213 12:05:21.487357  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.487366  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:21.487372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:21.487438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:21.513663  620795 cri.go:89] found id: ""
	I1213 12:05:21.513687  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.513696  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:21.513706  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:21.513740  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:21.547538  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:21.547713  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:21.648986  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:21.649007  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:21.649020  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:21.676895  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:21.676929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:21.706237  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:21.706268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:24.536202  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:26.537127  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:24.271406  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:24.281916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:24.281984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:24.306547  620795 cri.go:89] found id: ""
	I1213 12:05:24.306570  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.306579  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:24.306586  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:24.306645  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:24.334194  620795 cri.go:89] found id: ""
	I1213 12:05:24.334218  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.334227  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:24.334234  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:24.334291  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:24.360113  620795 cri.go:89] found id: ""
	I1213 12:05:24.360139  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.360148  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:24.360154  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:24.360219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:24.385854  620795 cri.go:89] found id: ""
	I1213 12:05:24.385879  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.385889  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:24.385896  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:24.385960  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:24.411999  620795 cri.go:89] found id: ""
	I1213 12:05:24.412025  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.412034  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:24.412042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:24.412102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:24.438300  620795 cri.go:89] found id: ""
	I1213 12:05:24.438325  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.438335  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:24.438347  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:24.438405  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:24.464325  620795 cri.go:89] found id: ""
	I1213 12:05:24.464351  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.464361  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:24.464369  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:24.464430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:24.491896  620795 cri.go:89] found id: ""
	I1213 12:05:24.491920  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.491930  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:24.491939  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:24.491971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:24.519363  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:24.519445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:24.616473  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:24.616502  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:24.692608  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:24.692645  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:24.711650  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:24.711689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:24.775602  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
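Taken together, the 620795 entries form one probe cycle (pgrep for kube-apiserver, the crictl listings, then log gathering) repeated roughly every three seconds until the apiserver appears or the start times out. A generic Go sketch of such a bounded retry loop follows; the interval, timeout, and stand-in probe are assumptions chosen only to mirror the cadence visible here, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries check every interval until it succeeds or timeout elapses.
func waitFor(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	err := waitFor(3*time.Second, 10*time.Second, func() error {
		attempts++
		fmt.Println("probe attempt", attempts)
		// Stand-in for the real probe, which keeps failing in this log.
		return errors.New("apiserver still unreachable")
	})
	fmt.Println("result:", err)
}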
	I1213 12:05:27.275849  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:27.286597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:27.286680  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:27.311787  620795 cri.go:89] found id: ""
	I1213 12:05:27.311813  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.311822  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:27.311829  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:27.311893  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:27.341056  620795 cri.go:89] found id: ""
	I1213 12:05:27.341123  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.341146  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:27.341160  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:27.341233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:27.365944  620795 cri.go:89] found id: ""
	I1213 12:05:27.365978  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.365986  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:27.365993  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:27.366057  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:27.390576  620795 cri.go:89] found id: ""
	I1213 12:05:27.390611  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.390626  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:27.390633  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:27.390702  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:27.420415  620795 cri.go:89] found id: ""
	I1213 12:05:27.420439  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.420448  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:27.420454  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:27.420516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:27.445745  620795 cri.go:89] found id: ""
	I1213 12:05:27.445812  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.445835  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:27.445853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:27.445936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:27.475470  620795 cri.go:89] found id: ""
	I1213 12:05:27.475508  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.475538  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:27.475547  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:27.475615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:27.502195  620795 cri.go:89] found id: ""
	I1213 12:05:27.502222  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.502231  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:27.502240  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:27.502252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:27.597636  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:27.597744  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:27.629736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:27.629763  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:27.694305  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.694327  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:27.694339  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:27.723090  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:27.723129  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:29.037051  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:31.536823  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:30.253217  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:30.264373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:30.264446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:30.290413  620795 cri.go:89] found id: ""
	I1213 12:05:30.290440  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.290450  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:30.290457  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:30.290517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:30.318052  620795 cri.go:89] found id: ""
	I1213 12:05:30.318079  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.318096  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:30.318104  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:30.318172  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:30.343233  620795 cri.go:89] found id: ""
	I1213 12:05:30.343267  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.343277  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:30.343283  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:30.343349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:30.373053  620795 cri.go:89] found id: ""
	I1213 12:05:30.373077  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.373086  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:30.373092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:30.373149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:30.401783  620795 cri.go:89] found id: ""
	I1213 12:05:30.401862  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.401879  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:30.401886  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:30.401955  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:30.427557  620795 cri.go:89] found id: ""
	I1213 12:05:30.427580  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.427589  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:30.427595  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:30.427652  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:30.452324  620795 cri.go:89] found id: ""
	I1213 12:05:30.452404  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.452426  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:30.452445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:30.452538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:30.485213  620795 cri.go:89] found id: ""
	I1213 12:05:30.485283  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.485307  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:30.485325  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:30.485337  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:30.567099  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:30.571250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:30.599905  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:30.599987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:30.671402  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:30.671475  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:30.671544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:30.700275  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:30.700310  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:33.229307  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:33.240030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:33.240101  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:33.264516  620795 cri.go:89] found id: ""
	I1213 12:05:33.264540  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.264550  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:33.264557  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:33.264622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:33.288665  620795 cri.go:89] found id: ""
	I1213 12:05:33.288694  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.288704  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:33.288711  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:33.288772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:33.318238  620795 cri.go:89] found id: ""
	I1213 12:05:33.318314  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.318338  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:33.318356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:33.318437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:33.342548  620795 cri.go:89] found id: ""
	I1213 12:05:33.342582  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.342592  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:33.342598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:33.342667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:33.368791  620795 cri.go:89] found id: ""
	I1213 12:05:33.368814  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.368823  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:33.368829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:33.368887  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:33.395218  620795 cri.go:89] found id: ""
	I1213 12:05:33.395254  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.395263  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:33.395270  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:33.395342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:33.422228  620795 cri.go:89] found id: ""
	I1213 12:05:33.422263  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.422272  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:33.422279  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:33.422345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:33.448101  620795 cri.go:89] found id: ""
	I1213 12:05:33.448126  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.448136  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:33.448146  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:33.448164  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:33.513958  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:33.513995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:33.536519  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:33.536547  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:33.642718  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:33.642742  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:33.642757  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:33.671233  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:33.671268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:34.036325  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:36.536291  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:36.205718  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:36.216490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:36.216599  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:36.242239  620795 cri.go:89] found id: ""
	I1213 12:05:36.242267  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.242277  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:36.242284  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:36.242345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:36.267114  620795 cri.go:89] found id: ""
	I1213 12:05:36.267140  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.267149  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:36.267155  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:36.267221  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:36.292484  620795 cri.go:89] found id: ""
	I1213 12:05:36.292510  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.292519  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:36.292525  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:36.292586  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:36.317342  620795 cri.go:89] found id: ""
	I1213 12:05:36.317365  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.317374  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:36.317380  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:36.317442  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:36.346675  620795 cri.go:89] found id: ""
	I1213 12:05:36.346746  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.346770  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:36.346788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:36.346878  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:36.374350  620795 cri.go:89] found id: ""
	I1213 12:05:36.374416  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.374440  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:36.374459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:36.374550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:36.401836  620795 cri.go:89] found id: ""
	I1213 12:05:36.401904  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.401927  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:36.401947  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:36.402023  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:36.436530  620795 cri.go:89] found id: ""
	I1213 12:05:36.436612  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.436635  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:36.436653  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:36.436680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:36.464595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:36.464663  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:36.550070  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:36.550121  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:36.581383  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:36.581414  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:36.674763  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:36.674830  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:36.674854  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:39.203663  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:39.214134  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:39.214211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:05:39.036349  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:41.036401  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:43.037206  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:39.240674  620795 cri.go:89] found id: ""
	I1213 12:05:39.240705  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.240714  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:39.240721  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:39.240786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:39.265873  620795 cri.go:89] found id: ""
	I1213 12:05:39.265895  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.265903  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:39.265909  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:39.265966  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:39.291928  620795 cri.go:89] found id: ""
	I1213 12:05:39.291952  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.291960  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:39.291978  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:39.292037  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:39.317111  620795 cri.go:89] found id: ""
	I1213 12:05:39.317144  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.317153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:39.317160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:39.317219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:39.341971  620795 cri.go:89] found id: ""
	I1213 12:05:39.341993  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.342002  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:39.342009  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:39.342065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:39.370095  620795 cri.go:89] found id: ""
	I1213 12:05:39.370166  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.370192  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:39.370212  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:39.370297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:39.396661  620795 cri.go:89] found id: ""
	I1213 12:05:39.396740  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.396765  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:39.396777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:39.396855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:39.426139  620795 cri.go:89] found id: ""
	I1213 12:05:39.426167  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.426177  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:39.426188  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:39.426199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:39.458970  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:39.459002  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:39.525484  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:39.525523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:39.554066  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:39.554149  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:39.647487  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:39.647508  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:39.647543  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.175675  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:42.189064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:42.189149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:42.220105  620795 cri.go:89] found id: ""
	I1213 12:05:42.220135  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.220156  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:42.220164  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:42.220229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:42.250459  620795 cri.go:89] found id: ""
	I1213 12:05:42.250486  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.250495  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:42.250502  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:42.250570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:42.278746  620795 cri.go:89] found id: ""
	I1213 12:05:42.278773  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.278785  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:42.278793  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:42.278855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:42.307046  620795 cri.go:89] found id: ""
	I1213 12:05:42.307073  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.307083  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:42.307092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:42.307153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:42.335010  620795 cri.go:89] found id: ""
	I1213 12:05:42.335035  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.335046  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:42.335052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:42.335114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:42.362128  620795 cri.go:89] found id: ""
	I1213 12:05:42.362154  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.362163  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:42.362170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:42.362231  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:42.396146  620795 cri.go:89] found id: ""
	I1213 12:05:42.396175  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.396186  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:42.396193  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:42.396254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:42.423111  620795 cri.go:89] found id: ""
	I1213 12:05:42.423137  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.423146  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:42.423155  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:42.423167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:42.440295  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:42.440325  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:42.504038  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:42.504059  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:42.504071  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.550928  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:42.550966  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:42.608904  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:42.608935  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:45.037527  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:47.536245  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:45.181124  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:45.197731  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:45.197873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:45.246027  620795 cri.go:89] found id: ""
	I1213 12:05:45.246070  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.246081  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:45.246106  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:45.246220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:45.279332  620795 cri.go:89] found id: ""
	I1213 12:05:45.279388  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.279398  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:45.279404  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:45.279509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:45.314910  620795 cri.go:89] found id: ""
	I1213 12:05:45.314988  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.315000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:45.315010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:45.315114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:45.343055  620795 cri.go:89] found id: ""
	I1213 12:05:45.343130  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.343153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:45.343175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:45.343282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:45.370166  620795 cri.go:89] found id: ""
	I1213 12:05:45.370240  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.370275  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:45.370299  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:45.370391  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:45.396456  620795 cri.go:89] found id: ""
	I1213 12:05:45.396480  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.396489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:45.396495  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:45.396550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:45.421687  620795 cri.go:89] found id: ""
	I1213 12:05:45.421711  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.421720  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:45.421726  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:45.421781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:45.446648  620795 cri.go:89] found id: ""
	I1213 12:05:45.446672  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.446681  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:45.446691  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:45.446702  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:45.512020  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:45.512055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:45.543051  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:45.543084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:45.640767  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:45.640789  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:45.640802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:45.670787  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:45.670822  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:48.201632  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:48.211975  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:48.212046  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:48.241331  620795 cri.go:89] found id: ""
	I1213 12:05:48.241355  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.241364  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:48.241371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:48.241430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:48.266481  620795 cri.go:89] found id: ""
	I1213 12:05:48.266506  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.266515  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:48.266523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:48.266581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:48.292562  620795 cri.go:89] found id: ""
	I1213 12:05:48.292587  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.292597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:48.292604  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:48.292666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:48.316829  620795 cri.go:89] found id: ""
	I1213 12:05:48.316853  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.316862  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:48.316869  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:48.316928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:48.341279  620795 cri.go:89] found id: ""
	I1213 12:05:48.341304  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.341313  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:48.341320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:48.341395  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:48.370602  620795 cri.go:89] found id: ""
	I1213 12:05:48.370668  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.370684  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:48.370692  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:48.370757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:48.395975  620795 cri.go:89] found id: ""
	I1213 12:05:48.396001  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.396011  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:48.396017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:48.396076  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:48.422104  620795 cri.go:89] found id: ""
	I1213 12:05:48.422129  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.422139  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:48.422150  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:48.422163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:48.487414  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:48.487451  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:48.504893  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:48.504924  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:48.613440  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:48.613472  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:48.613485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:48.643454  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:48.643496  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
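	The cycle above is minikube probing CRI-O for each expected control-plane container by name and finding none. A minimal shell sketch that reproduces the same probe by hand on the node (assuming crictl is installed, as the log's own `which crictl || echo crictl` fallback suggests):

	    #!/usr/bin/env bash
	    # Probe CRI-O for each control-plane container the test expects,
	    # mirroring the "sudo crictl ps -a --quiet --name=<component>" calls above.
	    set -euo pipefail
	    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
	                kube-controller-manager kindnet kubernetes-dashboard)
	    for name in "${components[@]}"; do
	      # --quiet prints only container IDs; an empty result means no container
	      # (running or exited) matches the name filter.
	      ids=$(sudo crictl ps -a --quiet --name="${name}")
	      if [ -z "${ids}" ]; then
	        echo "no container found matching \"${name}\""
	      else
	        echo "${name}: ${ids}"
	      fi
	    done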
	W1213 12:05:49.537116  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:52.036281  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:51.173081  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:51.184091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:51.184220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:51.209714  620795 cri.go:89] found id: ""
	I1213 12:05:51.209741  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.209751  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:51.209757  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:51.209815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:51.236381  620795 cri.go:89] found id: ""
	I1213 12:05:51.236414  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.236423  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:51.236429  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:51.236495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:51.266394  620795 cri.go:89] found id: ""
	I1213 12:05:51.266428  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.266437  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:51.266443  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:51.266509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:51.293949  620795 cri.go:89] found id: ""
	I1213 12:05:51.293981  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.293991  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:51.293998  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:51.294062  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:51.324019  620795 cri.go:89] found id: ""
	I1213 12:05:51.324042  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.324056  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:51.324062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:51.324145  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:51.352992  620795 cri.go:89] found id: ""
	I1213 12:05:51.353023  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.353032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:51.353039  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:51.353098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:51.378872  620795 cri.go:89] found id: ""
	I1213 12:05:51.378898  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.378907  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:51.378914  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:51.378976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:51.406670  620795 cri.go:89] found id: ""
	I1213 12:05:51.406695  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.406703  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:51.406713  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:51.406728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:51.469269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:51.469290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:51.469304  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:51.497318  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:51.497352  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:51.534646  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:51.534680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:51.618348  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:51.618388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
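	Every `kubectl describe nodes` attempt above fails with "connection refused" on localhost:8443, which is consistent with the pgrep probe finding no kube-apiserver process. A quick, hedged way to confirm on the node whether anything is listening on the apiserver port (curl is assumed to be available; a 401/403 from /healthz would still prove the socket is open, while "connection refused" means nothing is listening):

	    # Is there a kube-apiserver process at all? (same probe minikube runs above)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    # Is anything listening on 8443? A TLS/auth error still means the port is open;
	    # "connection refused" means the apiserver never came up.
	    curl -k --max-time 5 https://localhost:8443/healthz || true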
	I1213 12:05:54.137197  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:54.147708  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:54.147778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:54.173064  620795 cri.go:89] found id: ""
	I1213 12:05:54.173089  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.173098  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:54.173105  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:54.173164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:54.198688  620795 cri.go:89] found id: ""
	I1213 12:05:54.198713  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.198723  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:54.198733  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:54.198789  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:54.224472  620795 cri.go:89] found id: ""
	I1213 12:05:54.224497  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.224506  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:54.224512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:54.224571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 12:05:54.536956  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:56.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:54.254875  620795 cri.go:89] found id: ""
	I1213 12:05:54.254900  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.254909  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:54.254916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:54.254985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:54.286287  620795 cri.go:89] found id: ""
	I1213 12:05:54.286314  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.286322  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:54.286329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:54.286384  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:54.312009  620795 cri.go:89] found id: ""
	I1213 12:05:54.312034  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.312043  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:54.312050  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:54.312109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:54.338472  620795 cri.go:89] found id: ""
	I1213 12:05:54.338506  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.338516  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:54.338522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:54.338590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:54.363767  620795 cri.go:89] found id: ""
	I1213 12:05:54.363791  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.363799  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:54.363810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:54.363827  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:54.429426  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:54.429462  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.446820  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:54.446859  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:54.514113  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:54.514137  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:54.514150  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:54.547597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:54.547688  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.126156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:57.136777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:57.136854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:57.166084  620795 cri.go:89] found id: ""
	I1213 12:05:57.166107  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.166116  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:57.166122  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:57.166180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:57.194344  620795 cri.go:89] found id: ""
	I1213 12:05:57.194368  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.194377  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:57.194384  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:57.194445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:57.220264  620795 cri.go:89] found id: ""
	I1213 12:05:57.220289  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.220298  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:57.220305  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:57.220362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:57.245200  620795 cri.go:89] found id: ""
	I1213 12:05:57.245222  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.245230  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:57.245236  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:57.245292  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:57.272963  620795 cri.go:89] found id: ""
	I1213 12:05:57.272987  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.272996  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:57.273003  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:57.273061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:57.297916  620795 cri.go:89] found id: ""
	I1213 12:05:57.297940  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.297947  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:57.297954  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:57.298016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:57.323201  620795 cri.go:89] found id: ""
	I1213 12:05:57.323226  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.323235  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:57.323241  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:57.323301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:57.348727  620795 cri.go:89] found id: ""
	I1213 12:05:57.348759  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.348769  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:57.348779  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:57.348794  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:57.424991  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:57.425015  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:57.425027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:57.454618  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:57.454652  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.482599  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:57.482627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:57.556901  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:57.556982  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:05:58.537235  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:01.037253  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:00.078226  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:00.114729  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:00.114815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:00.214510  620795 cri.go:89] found id: ""
	I1213 12:06:00.214537  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.214547  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:00.214560  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:00.214644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:00.283401  620795 cri.go:89] found id: ""
	I1213 12:06:00.283433  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.283443  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:00.283450  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:00.283564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:00.333853  620795 cri.go:89] found id: ""
	I1213 12:06:00.333946  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.333974  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:00.333999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:00.334124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:00.370564  620795 cri.go:89] found id: ""
	I1213 12:06:00.370647  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.370670  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:00.370693  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:00.370796  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:00.400318  620795 cri.go:89] found id: ""
	I1213 12:06:00.400355  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.400365  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:00.400373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:00.400451  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:00.429349  620795 cri.go:89] found id: ""
	I1213 12:06:00.429376  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.429387  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:00.429394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:00.429480  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:00.457513  620795 cri.go:89] found id: ""
	I1213 12:06:00.457540  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.457549  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:00.457555  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:00.457617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:00.484050  620795 cri.go:89] found id: ""
	I1213 12:06:00.484077  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.484086  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:00.484096  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:00.484110  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:00.564314  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:00.564357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:00.586853  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:00.586884  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:00.678609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:00.678679  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:00.678699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:00.708726  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:00.708764  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
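	Between container probes, minikube collects the same diagnostic bundle each round: the kubelet and CRI-O journals, filtered dmesg, a describe-nodes attempt, and container status. The same bundle can be captured by hand with the commands the log shows; a small sketch, with paths and line counts taken from the log above:

	    #!/usr/bin/env bash
	    # Collect the same diagnostics minikube gathers each round in the log above.
	    out=/tmp/minikube-diag; mkdir -p "${out}"
	    sudo journalctl -u kubelet -n 400 >"${out}/kubelet.log"
	    sudo journalctl -u crio -n 400 >"${out}/crio.log"
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 >"${out}/dmesg.log"
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig >"${out}/nodes.txt" 2>&1 || true
	    sudo "$(which crictl || echo crictl)" ps -a >"${out}/containers.txt" \
	      || sudo docker ps -a >"${out}/containers.txt"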
	I1213 12:06:03.239868  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:03.250271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:03.250342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:03.278221  620795 cri.go:89] found id: ""
	I1213 12:06:03.278246  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.278254  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:03.278261  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:03.278323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:03.307255  620795 cri.go:89] found id: ""
	I1213 12:06:03.307280  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.307288  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:03.307295  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:03.307358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:03.334371  620795 cri.go:89] found id: ""
	I1213 12:06:03.334394  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.334402  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:03.334408  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:03.334465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:03.359920  620795 cri.go:89] found id: ""
	I1213 12:06:03.359947  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.359959  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:03.359966  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:03.360026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:03.388349  620795 cri.go:89] found id: ""
	I1213 12:06:03.388373  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.388382  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:03.388389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:03.388446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:03.413684  620795 cri.go:89] found id: ""
	I1213 12:06:03.413712  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.413721  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:03.413727  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:03.413786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:03.438590  620795 cri.go:89] found id: ""
	I1213 12:06:03.438613  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.438622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:03.438629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:03.438686  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:03.466031  620795 cri.go:89] found id: ""
	I1213 12:06:03.466065  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.466074  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:03.466084  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:03.466095  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:03.540002  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:03.540037  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:03.581254  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:03.581285  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:03.657609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:03.657641  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:03.657654  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:03.686248  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:03.686284  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:03.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:05.537188  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:07.537266  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
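	Interleaved with the log-gathering loop, a second test process keeps polling the Ready condition of node "no-preload-307409" against https://192.168.85.2:8443 and retrying on "connection refused". The same wait can be expressed from a client machine with kubectl; the context name below is illustrative, not taken from the log:

	    # Block until the node reports Ready, or give up after 5 minutes.
	    kubectl --context no-preload-307409 wait node/no-preload-307409 \
	      --for=condition=Ready --timeout=5m
	    # Or probe the same REST endpoint the log shows (this will keep failing with
	    # "connection refused" while the apiserver is down):
	    curl -k https://192.168.85.2:8443/api/v1/nodes/no-preload-307409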
	I1213 12:06:06.215254  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:06.226059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:06.226130  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:06.252206  620795 cri.go:89] found id: ""
	I1213 12:06:06.252229  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.252237  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:06.252243  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:06.252306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:06.282327  620795 cri.go:89] found id: ""
	I1213 12:06:06.282349  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.282358  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:06.282364  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:06.282425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:06.312866  620795 cri.go:89] found id: ""
	I1213 12:06:06.312889  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.312898  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:06.312905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:06.312964  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:06.339757  620795 cri.go:89] found id: ""
	I1213 12:06:06.339828  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.339851  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:06.339865  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:06.339937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:06.366465  620795 cri.go:89] found id: ""
	I1213 12:06:06.366491  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.366508  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:06.366515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:06.366589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:06.395704  620795 cri.go:89] found id: ""
	I1213 12:06:06.395727  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.395735  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:06.395742  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:06.395800  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:06.420941  620795 cri.go:89] found id: ""
	I1213 12:06:06.420966  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.420974  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:06.420981  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:06.421040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:06.446747  620795 cri.go:89] found id: ""
	I1213 12:06:06.446771  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.446781  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:06.446790  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:06.446802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:06.515396  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:06.515437  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:06.537368  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:06.537458  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:06.638118  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:06.638202  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:06.638230  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:06.668749  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:06.668789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.204205  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:09.214694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:09.214763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:06:10.037386  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:12.536953  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:09.240252  620795 cri.go:89] found id: ""
	I1213 12:06:09.240291  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.240301  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:09.240307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:09.240372  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:09.267161  620795 cri.go:89] found id: ""
	I1213 12:06:09.267188  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.267197  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:09.267203  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:09.267263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:09.292472  620795 cri.go:89] found id: ""
	I1213 12:06:09.292501  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.292510  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:09.292517  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:09.292581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:09.317718  620795 cri.go:89] found id: ""
	I1213 12:06:09.317745  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.317754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:09.317760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:09.317819  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:09.342979  620795 cri.go:89] found id: ""
	I1213 12:06:09.343006  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.343015  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:09.343021  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:09.343080  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:09.370344  620795 cri.go:89] found id: ""
	I1213 12:06:09.370368  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.370377  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:09.370383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:09.370441  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:09.397428  620795 cri.go:89] found id: ""
	I1213 12:06:09.397451  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.397461  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:09.397467  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:09.397527  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:09.422862  620795 cri.go:89] found id: ""
	I1213 12:06:09.422890  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.422900  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:09.422909  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:09.422923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:09.486031  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:09.486057  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:09.486070  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:09.514736  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:09.514772  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.586482  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:09.586558  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:09.660422  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:09.660459  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.179299  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:12.190230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:12.190302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:12.216052  620795 cri.go:89] found id: ""
	I1213 12:06:12.216076  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.216085  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:12.216092  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:12.216150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:12.245417  620795 cri.go:89] found id: ""
	I1213 12:06:12.245443  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.245453  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:12.245460  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:12.245525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:12.272357  620795 cri.go:89] found id: ""
	I1213 12:06:12.272382  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.272391  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:12.272397  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:12.272459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:12.297431  620795 cri.go:89] found id: ""
	I1213 12:06:12.297458  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.297467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:12.297479  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:12.297537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:12.322773  620795 cri.go:89] found id: ""
	I1213 12:06:12.322796  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.322805  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:12.322829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:12.322894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:12.348212  620795 cri.go:89] found id: ""
	I1213 12:06:12.348278  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.348293  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:12.348301  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:12.348360  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:12.378078  620795 cri.go:89] found id: ""
	I1213 12:06:12.378105  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.378115  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:12.378122  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:12.378186  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:12.403938  620795 cri.go:89] found id: ""
	I1213 12:06:12.404005  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.404029  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:12.404044  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:12.404056  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:12.432395  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:12.432433  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:12.465021  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:12.465055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:12.533527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:12.533564  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.557847  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:12.557876  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:12.649280  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
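Every "describe nodes" attempt above fails with "connect: connection refused" on localhost:8443, which indicates that nothing is listening on the API server port at all (rather than a TLS or auth failure). A quick check from the node, sketched here assuming the default minikube apiserver port of 8443:

# Is anything bound to the apiserver port?
sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
# If something is listening, probe the unauthenticated health endpoint
curl -sk https://localhost:8443/healthz || true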
	W1213 12:06:15.036244  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:17.037163  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
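The interleaved node_ready.go warnings come from a second test process (622913, the no-preload-307409 profile) polling the node's Ready condition against 192.168.85.2:8443 and hitting the same connection refused. A sketch of the equivalent manual checks; the kubectl context name is assumed to match the profile name, as minikube normally sets it up:

# Probe the exact URL from the warning (fails the same way while the apiserver is down)
curl -sk https://192.168.85.2:8443/api/v1/nodes/no-preload-307409
# Or read the Ready condition through kubectl once the apiserver responds
kubectl --context no-preload-307409 get node no-preload-307409 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'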
	I1213 12:06:15.150199  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:15.161093  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:15.161164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:15.188375  620795 cri.go:89] found id: ""
	I1213 12:06:15.188402  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.188411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:15.188420  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:15.188494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:15.213569  620795 cri.go:89] found id: ""
	I1213 12:06:15.213592  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.213601  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:15.213607  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:15.213667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:15.244468  620795 cri.go:89] found id: ""
	I1213 12:06:15.244490  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.244499  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:15.244505  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:15.244565  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:15.269446  620795 cri.go:89] found id: ""
	I1213 12:06:15.269469  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.269478  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:15.269484  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:15.269544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:15.297921  620795 cri.go:89] found id: ""
	I1213 12:06:15.297947  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.297957  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:15.297965  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:15.298029  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:15.323225  620795 cri.go:89] found id: ""
	I1213 12:06:15.323248  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.323256  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:15.323263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:15.323322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:15.349965  620795 cri.go:89] found id: ""
	I1213 12:06:15.349988  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.349999  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:15.350005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:15.350067  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:15.378207  620795 cri.go:89] found id: ""
	I1213 12:06:15.378236  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.378247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:15.378258  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:15.378271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:15.443150  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:15.443182  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:15.459353  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:15.459388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:15.546545  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:15.546611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:15.546638  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:15.582173  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:15.582258  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:18.126037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:18.137115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:18.137190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:18.164991  620795 cri.go:89] found id: ""
	I1213 12:06:18.165017  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.165026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:18.165033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:18.165092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:18.191806  620795 cri.go:89] found id: ""
	I1213 12:06:18.191832  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.191841  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:18.191848  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:18.191906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:18.222284  620795 cri.go:89] found id: ""
	I1213 12:06:18.222310  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.222320  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:18.222329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:18.222389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:18.250305  620795 cri.go:89] found id: ""
	I1213 12:06:18.250332  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.250342  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:18.250348  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:18.250406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:18.276798  620795 cri.go:89] found id: ""
	I1213 12:06:18.276823  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.276833  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:18.276841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:18.276901  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:18.301916  620795 cri.go:89] found id: ""
	I1213 12:06:18.301943  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.301952  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:18.301959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:18.302017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:18.327545  620795 cri.go:89] found id: ""
	I1213 12:06:18.327569  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.327577  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:18.327584  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:18.327681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:18.352817  620795 cri.go:89] found id: ""
	I1213 12:06:18.352844  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.352854  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:18.352863  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:18.352902  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:18.418564  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:18.418601  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:18.434897  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:18.434928  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:18.499340  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:18.499366  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:18.499380  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:18.528897  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:18.528980  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:19.537261  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:22.037303  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:21.104122  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:21.114671  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:21.114786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:21.140990  620795 cri.go:89] found id: ""
	I1213 12:06:21.141014  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.141024  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:21.141030  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:21.141087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:21.168480  620795 cri.go:89] found id: ""
	I1213 12:06:21.168510  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.168519  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:21.168526  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:21.168583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:21.193893  620795 cri.go:89] found id: ""
	I1213 12:06:21.193916  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.193924  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:21.193930  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:21.193985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:21.222789  620795 cri.go:89] found id: ""
	I1213 12:06:21.222811  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.222820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:21.222827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:21.222885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:21.254379  620795 cri.go:89] found id: ""
	I1213 12:06:21.254402  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.254411  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:21.254417  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:21.254476  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:21.280020  620795 cri.go:89] found id: ""
	I1213 12:06:21.280049  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.280058  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:21.280065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:21.280123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:21.305920  620795 cri.go:89] found id: ""
	I1213 12:06:21.305942  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.305952  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:21.305957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:21.306031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:21.334376  620795 cri.go:89] found id: ""
	I1213 12:06:21.334400  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.334409  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:21.334417  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:21.334429  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:21.362868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:21.362906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:21.397678  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:21.397727  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:21.465535  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:21.465574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:21.482417  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:21.482443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:21.566636  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
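The "describe nodes" step runs the Kubernetes-version-matched kubectl binary that minikube places on the node, against the node-local kubeconfig. To reproduce it outside the test harness, a sketch (the profile name is a placeholder, since this excerpt does not identify which profile process 620795 belongs to):

# <profile> is a placeholder for the minikube profile under test
minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
  --kubeconfig=/var/lib/minikube/kubeconfig describe nodes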
	I1213 12:06:24.068339  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:24.079607  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:24.079684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:24.105575  620795 cri.go:89] found id: ""
	I1213 12:06:24.105609  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.105619  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:24.105626  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:24.105696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:24.131798  620795 cri.go:89] found id: ""
	I1213 12:06:24.131830  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.131840  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:24.131846  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:24.131905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:24.157068  620795 cri.go:89] found id: ""
	I1213 12:06:24.157096  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.157106  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:24.157113  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:24.157168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:24.186737  620795 cri.go:89] found id: ""
	I1213 12:06:24.186762  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.186772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:24.186779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:24.186843  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:24.214700  620795 cri.go:89] found id: ""
	I1213 12:06:24.214726  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.214745  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:24.214751  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:24.214815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1213 12:06:24.537013  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:27.037104  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:24.242048  620795 cri.go:89] found id: ""
	I1213 12:06:24.242074  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.242083  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:24.242090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:24.242180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:24.270953  620795 cri.go:89] found id: ""
	I1213 12:06:24.270978  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.270987  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:24.270994  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:24.271074  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:24.296220  620795 cri.go:89] found id: ""
	I1213 12:06:24.296246  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.296256  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:24.296267  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:24.296278  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:24.325330  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:24.325367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:24.355217  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:24.355255  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:24.421526  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:24.421566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:24.438978  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:24.439012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:24.514169  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.015192  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:27.026779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:27.026871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:27.054321  620795 cri.go:89] found id: ""
	I1213 12:06:27.054347  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.054357  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:27.054364  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:27.054423  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:27.084443  620795 cri.go:89] found id: ""
	I1213 12:06:27.084467  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.084476  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:27.084482  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:27.084542  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:27.110224  620795 cri.go:89] found id: ""
	I1213 12:06:27.110251  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.110260  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:27.110267  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:27.110326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:27.141821  620795 cri.go:89] found id: ""
	I1213 12:06:27.141847  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.141857  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:27.141863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:27.141953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:27.168110  620795 cri.go:89] found id: ""
	I1213 12:06:27.168143  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.168153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:27.168160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:27.168228  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:27.193708  620795 cri.go:89] found id: ""
	I1213 12:06:27.193775  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.193791  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:27.193802  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:27.193862  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:27.220542  620795 cri.go:89] found id: ""
	I1213 12:06:27.220569  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.220578  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:27.220585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:27.220673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:27.248536  620795 cri.go:89] found id: ""
	I1213 12:06:27.248614  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.248630  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:27.248641  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:27.248653  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:27.314354  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:27.314389  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:27.331795  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:27.331824  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:27.397269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.397290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:27.397303  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:27.425995  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:27.426034  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:29.537185  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:32.037043  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:29.964336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:29.975190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:29.975264  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:30.020235  620795 cri.go:89] found id: ""
	I1213 12:06:30.020330  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.020353  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:30.020373  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:30.020492  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:30.064384  620795 cri.go:89] found id: ""
	I1213 12:06:30.064422  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.064431  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:30.064438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:30.064537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:30.093930  620795 cri.go:89] found id: ""
	I1213 12:06:30.093974  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.094003  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:30.094018  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:30.094092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:30.121799  620795 cri.go:89] found id: ""
	I1213 12:06:30.121830  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.121846  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:30.121854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:30.121994  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:30.150127  620795 cri.go:89] found id: ""
	I1213 12:06:30.150153  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.150163  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:30.150170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:30.150232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:30.177848  620795 cri.go:89] found id: ""
	I1213 12:06:30.177873  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.177883  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:30.177889  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:30.177948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:30.204179  620795 cri.go:89] found id: ""
	I1213 12:06:30.204216  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.204225  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:30.204235  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:30.204295  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:30.230625  620795 cri.go:89] found id: ""
	I1213 12:06:30.230653  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.230663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:30.230673  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:30.230685  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:30.297598  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:30.297634  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:30.314962  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:30.314993  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:30.380114  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:30.380136  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:30.380148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:30.408485  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:30.408523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:32.936773  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:32.947334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:32.947408  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:32.974265  620795 cri.go:89] found id: ""
	I1213 12:06:32.974291  620795 logs.go:282] 0 containers: []
	W1213 12:06:32.974300  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:32.974307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:32.974365  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:33.005585  620795 cri.go:89] found id: ""
	I1213 12:06:33.005616  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.005627  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:33.005633  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:33.005704  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:33.036036  620795 cri.go:89] found id: ""
	I1213 12:06:33.036058  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.036072  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:33.036079  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:33.036136  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:33.062415  620795 cri.go:89] found id: ""
	I1213 12:06:33.062439  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.062448  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:33.062455  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:33.062515  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:33.091004  620795 cri.go:89] found id: ""
	I1213 12:06:33.091072  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.091095  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:33.091115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:33.091193  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:33.116964  620795 cri.go:89] found id: ""
	I1213 12:06:33.116989  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.116999  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:33.117005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:33.117084  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:33.143886  620795 cri.go:89] found id: ""
	I1213 12:06:33.143908  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.143918  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:33.143924  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:33.143984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:33.177672  620795 cri.go:89] found id: ""
	I1213 12:06:33.177697  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.177707  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:33.177716  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:33.177728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:33.194235  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:33.194266  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:33.258679  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:33.258703  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:33.258715  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:33.287694  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:33.287731  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:33.319142  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:33.319168  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:34.037106  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:36.037218  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:35.883653  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:35.894470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:35.894540  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:35.922164  620795 cri.go:89] found id: ""
	I1213 12:06:35.922243  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.922268  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:35.922286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:35.922378  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:35.948794  620795 cri.go:89] found id: ""
	I1213 12:06:35.948824  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.948833  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:35.948840  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:35.948916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:35.976985  620795 cri.go:89] found id: ""
	I1213 12:06:35.977012  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.977023  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:35.977030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:35.977097  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:36.008179  620795 cri.go:89] found id: ""
	I1213 12:06:36.008210  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.008221  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:36.008229  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:36.008306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:36.037414  620795 cri.go:89] found id: ""
	I1213 12:06:36.037434  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.037442  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:36.037448  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:36.037505  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:36.066253  620795 cri.go:89] found id: ""
	I1213 12:06:36.066290  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.066304  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:36.066319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:36.066394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:36.093841  620795 cri.go:89] found id: ""
	I1213 12:06:36.093938  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.093955  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:36.093963  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:36.094042  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:36.119692  620795 cri.go:89] found id: ""
	I1213 12:06:36.119728  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.119737  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:36.119747  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:36.119761  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:36.136247  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:36.136322  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:36.202464  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:36.202486  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:36.202500  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:36.230571  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:36.230606  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:36.257928  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:36.257955  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:38.826068  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:38.841833  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:38.841915  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:38.871763  620795 cri.go:89] found id: ""
	I1213 12:06:38.871788  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.871797  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:38.871803  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:38.871870  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:38.897931  620795 cri.go:89] found id: ""
	I1213 12:06:38.897956  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.897966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:38.897972  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:38.898064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:38.928095  620795 cri.go:89] found id: ""
	I1213 12:06:38.928121  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.928131  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:38.928138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:38.928202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:38.954066  620795 cri.go:89] found id: ""
	I1213 12:06:38.954090  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.954098  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:38.954105  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:38.954168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:38.978723  620795 cri.go:89] found id: ""
	I1213 12:06:38.978752  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.978762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:38.978769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:38.978825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:39.006341  620795 cri.go:89] found id: ""
	I1213 12:06:39.006374  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.006383  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:39.006390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:39.006462  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:39.032585  620795 cri.go:89] found id: ""
	I1213 12:06:39.032612  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.032622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:39.032629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:39.032699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:39.061395  620795 cri.go:89] found id: ""
	I1213 12:06:39.061426  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.061436  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:39.061446  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:39.061457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:39.091343  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:39.091367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:39.160940  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:39.160987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:39.177451  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:39.177490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:38.536279  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:40.537278  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:43.037128  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:39.246489  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:39.246510  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:39.246524  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:41.775639  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:41.794476  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:41.794600  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:41.831000  620795 cri.go:89] found id: ""
	I1213 12:06:41.831074  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.831102  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:41.831121  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:41.831203  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:41.872779  620795 cri.go:89] found id: ""
	I1213 12:06:41.872806  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.872816  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:41.872823  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:41.872903  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:41.902394  620795 cri.go:89] found id: ""
	I1213 12:06:41.902420  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.902429  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:41.902435  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:41.902494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:41.929459  620795 cri.go:89] found id: ""
	I1213 12:06:41.929485  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.929494  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:41.929501  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:41.929563  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:41.955676  620795 cri.go:89] found id: ""
	I1213 12:06:41.955700  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.955716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:41.955724  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:41.955783  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:41.981839  620795 cri.go:89] found id: ""
	I1213 12:06:41.981865  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.981875  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:41.981882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:41.981939  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:42.021720  620795 cri.go:89] found id: ""
	I1213 12:06:42.021808  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.021827  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:42.021836  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:42.021908  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:42.052304  620795 cri.go:89] found id: ""
	I1213 12:06:42.052332  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.052341  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:42.052351  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:42.052382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:42.071214  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:42.071250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:42.151103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:42.151127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:42.151146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:42.183473  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:42.183646  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:42.226797  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:42.226834  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:45.037308  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:47.537265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:44.796943  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:44.821281  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:44.821413  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:44.863598  620795 cri.go:89] found id: ""
	I1213 12:06:44.863672  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.863697  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:44.863718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:44.863805  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:44.892309  620795 cri.go:89] found id: ""
	I1213 12:06:44.892395  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.892418  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:44.892438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:44.892552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:44.918444  620795 cri.go:89] found id: ""
	I1213 12:06:44.918522  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.918557  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:44.918581  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:44.918673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:44.944223  620795 cri.go:89] found id: ""
	I1213 12:06:44.944249  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.944258  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:44.944265  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:44.944327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:44.970515  620795 cri.go:89] found id: ""
	I1213 12:06:44.970548  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.970559  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:44.970566  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:44.970626  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:44.996938  620795 cri.go:89] found id: ""
	I1213 12:06:44.996966  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.996976  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:44.996983  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:44.997050  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:45.050971  620795 cri.go:89] found id: ""
	I1213 12:06:45.051001  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.051020  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:45.051028  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:45.051107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:45.095037  620795 cri.go:89] found id: ""
	I1213 12:06:45.095076  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.095087  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:45.095098  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:45.095116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:45.209528  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:45.209618  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:45.240275  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:45.240311  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:45.322872  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:45.322895  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:45.322909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:45.353126  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:45.353162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:47.883672  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:47.894317  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:47.894394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:47.920883  620795 cri.go:89] found id: ""
	I1213 12:06:47.920909  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.920919  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:47.920927  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:47.920985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:47.947168  620795 cri.go:89] found id: ""
	I1213 12:06:47.947197  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.947207  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:47.947214  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:47.947279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:47.972678  620795 cri.go:89] found id: ""
	I1213 12:06:47.972701  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.972710  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:47.972717  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:47.972779  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:48.010849  620795 cri.go:89] found id: ""
	I1213 12:06:48.010915  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.010939  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:48.010961  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:48.011038  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:48.040005  620795 cri.go:89] found id: ""
	I1213 12:06:48.040074  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.040098  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:48.040118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:48.040211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:48.067778  620795 cri.go:89] found id: ""
	I1213 12:06:48.067806  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.067815  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:48.067822  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:48.067884  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:48.096165  620795 cri.go:89] found id: ""
	I1213 12:06:48.096207  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.096218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:48.096224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:48.096297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:48.123725  620795 cri.go:89] found id: ""
	I1213 12:06:48.123761  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.123771  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:48.123781  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:48.123793  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:48.153693  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:48.153733  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:48.185148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:48.185227  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:48.251689  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:48.251724  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:48.269048  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:48.269079  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:48.336435  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:50.037084  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:52.037310  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:50.836744  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:50.848522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:50.848593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:50.874981  620795 cri.go:89] found id: ""
	I1213 12:06:50.875065  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.875088  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:50.875108  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:50.875219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:50.900176  620795 cri.go:89] found id: ""
	I1213 12:06:50.900203  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.900213  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:50.900219  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:50.900277  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:50.929844  620795 cri.go:89] found id: ""
	I1213 12:06:50.929869  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.929878  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:50.929885  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:50.929943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:50.955008  620795 cri.go:89] found id: ""
	I1213 12:06:50.955033  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.955042  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:50.955049  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:50.955104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:50.982109  620795 cri.go:89] found id: ""
	I1213 12:06:50.982134  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.982143  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:50.982149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:50.982211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:51.013066  620795 cri.go:89] found id: ""
	I1213 12:06:51.013144  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.013160  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:51.013168  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:51.013236  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:51.042207  620795 cri.go:89] found id: ""
	I1213 12:06:51.042233  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.042243  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:51.042250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:51.042315  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:51.068089  620795 cri.go:89] found id: ""
	I1213 12:06:51.068116  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.068125  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:51.068135  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:51.068146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:51.136510  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:51.136550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:51.153539  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:51.153567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:51.227168  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:51.227240  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:51.227271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:51.256505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:51.256541  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:53.786599  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:53.808412  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:53.808498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:53.866097  620795 cri.go:89] found id: ""
	I1213 12:06:53.866124  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.866133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:53.866140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:53.866197  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:53.896398  620795 cri.go:89] found id: ""
	I1213 12:06:53.896426  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.896435  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:53.896442  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:53.896499  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:53.922228  620795 cri.go:89] found id: ""
	I1213 12:06:53.922255  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.922265  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:53.922271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:53.922333  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:53.947081  620795 cri.go:89] found id: ""
	I1213 12:06:53.947107  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.947116  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:53.947123  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:53.947177  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:53.972340  620795 cri.go:89] found id: ""
	I1213 12:06:53.972365  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.972374  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:53.972381  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:53.972437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:54.000806  620795 cri.go:89] found id: ""
	I1213 12:06:54.000835  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.000844  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:54.000851  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:54.000925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:54.030584  620795 cri.go:89] found id: ""
	I1213 12:06:54.030617  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.030626  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:54.030648  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:54.030734  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:54.056807  620795 cri.go:89] found id: ""
	I1213 12:06:54.056833  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.056842  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:54.056877  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:54.056897  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:54.122299  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:54.122347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:54.139911  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:54.139944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:54.202433  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:54.202453  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:54.202466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:54.230939  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:54.230977  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:54.536621  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:56.537197  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:56.761244  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:56.773199  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:56.773280  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:56.833295  620795 cri.go:89] found id: ""
	I1213 12:06:56.833323  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.833338  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:56.833345  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:56.833410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:56.877141  620795 cri.go:89] found id: ""
	I1213 12:06:56.877179  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.877189  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:56.877195  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:56.877255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:56.909304  620795 cri.go:89] found id: ""
	I1213 12:06:56.909329  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.909337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:56.909344  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:56.909402  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:56.937175  620795 cri.go:89] found id: ""
	I1213 12:06:56.937206  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.937215  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:56.937222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:56.937283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:56.962816  620795 cri.go:89] found id: ""
	I1213 12:06:56.962839  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.962848  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:56.962854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:56.962909  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:56.988340  620795 cri.go:89] found id: ""
	I1213 12:06:56.988364  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.988372  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:56.988379  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:56.988438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:57.014873  620795 cri.go:89] found id: ""
	I1213 12:06:57.014956  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.014979  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:57.014997  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:57.015107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:57.042222  620795 cri.go:89] found id: ""
	I1213 12:06:57.042295  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.042331  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:57.042357  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:57.042383  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:57.070110  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:57.070148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:57.097788  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:57.097812  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:57.164029  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:57.164067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:57.182586  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:57.182619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:57.253568  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:59.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:01.537092  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:59.753877  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:59.764872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:59.764943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:59.794978  620795 cri.go:89] found id: ""
	I1213 12:06:59.795002  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.795016  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:59.795027  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:59.795086  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:59.832235  620795 cri.go:89] found id: ""
	I1213 12:06:59.832264  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.832276  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:59.832283  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:59.832342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:59.879189  620795 cri.go:89] found id: ""
	I1213 12:06:59.879217  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.879227  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:59.879233  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:59.879296  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:59.906738  620795 cri.go:89] found id: ""
	I1213 12:06:59.906766  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.906775  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:59.906782  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:59.906838  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:59.934746  620795 cri.go:89] found id: ""
	I1213 12:06:59.934774  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.934783  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:59.934790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:59.934852  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:59.962016  620795 cri.go:89] found id: ""
	I1213 12:06:59.962049  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.962059  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:59.962066  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:59.962123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:59.988024  620795 cri.go:89] found id: ""
	I1213 12:06:59.988047  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.988056  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:59.988062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:59.988118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:00.062022  620795 cri.go:89] found id: ""
	I1213 12:07:00.062049  620795 logs.go:282] 0 containers: []
	W1213 12:07:00.062059  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:00.062076  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:00.062094  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:00.179599  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:00.181365  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:00.211914  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:00.211958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:00.303311  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:00.303333  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:00.303347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:00.339996  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:00.340039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
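	Each gathering cycle in this section collects the same five sources: the kubelet journal, dmesg, describe nodes, the CRI-O journal, and container status. A hedged sketch that bundles those commands into one script, using only the invocations visible in the log (the wrapper script itself is an illustration, not something the harness runs):

	    #!/bin/bash
	    # collect the same diagnostics minikube gathers in each cycle above
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a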
	I1213 12:07:02.882696  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:02.898926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:02.899000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:02.928919  620795 cri.go:89] found id: ""
	I1213 12:07:02.928949  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.928959  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:02.928967  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:02.929030  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:02.955168  620795 cri.go:89] found id: ""
	I1213 12:07:02.955194  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.955209  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:02.955215  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:02.955273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:02.984105  620795 cri.go:89] found id: ""
	I1213 12:07:02.984132  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.984141  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:02.984159  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:02.984220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:03.011185  620795 cri.go:89] found id: ""
	I1213 12:07:03.011210  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.011219  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:03.011227  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:03.011289  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:03.038557  620795 cri.go:89] found id: ""
	I1213 12:07:03.038580  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.038588  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:03.038594  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:03.038656  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:03.064610  620795 cri.go:89] found id: ""
	I1213 12:07:03.064650  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.064661  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:03.064667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:03.064725  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:03.090406  620795 cri.go:89] found id: ""
	I1213 12:07:03.090432  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.090441  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:03.090447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:03.090506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:03.117733  620795 cri.go:89] found id: ""
	I1213 12:07:03.117761  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.117770  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
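	Each pass probes the CRI for every expected component one name at a time, and every probe above returns an empty ID list. A compact loop over the same names (the loop is only an illustration; the harness issues the crictl calls individually) makes the "no control-plane containers at all" state easy to confirm interactively:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $name =="
	      sudo crictl ps -a --quiet --name="$name"
	    done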
	I1213 12:07:03.117780  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:03.117792  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:03.185975  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:03.185999  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:03.186011  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:03.214353  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:03.214387  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:03.244844  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:03.244873  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:03.310569  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:03.310608  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:07:04.037144  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:06.537015  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
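	The second client (pid 622913) is meanwhile polling the node object for no-preload-307409 directly against 192.168.85.2:8443 and hitting the same connection refused. A sketch for hitting that endpoint by hand, assuming the caller can route to 192.168.85.2 (-k skips TLS verification; the request is expected to fail with "connection refused" while the apiserver is down):

	    # same URL the node_ready poller keeps retrying above
	    curl -k https://192.168.85.2:8443/api/v1/nodes/no-preload-307409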
	I1213 12:07:05.828010  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:05.840499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:05.840570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:05.867194  620795 cri.go:89] found id: ""
	I1213 12:07:05.867272  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.867295  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:05.867314  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:05.867394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:05.894013  620795 cri.go:89] found id: ""
	I1213 12:07:05.894044  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.894054  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:05.894061  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:05.894126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:05.920207  620795 cri.go:89] found id: ""
	I1213 12:07:05.920234  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.920244  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:05.920250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:05.920309  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:05.948255  620795 cri.go:89] found id: ""
	I1213 12:07:05.948280  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.948289  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:05.948295  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:05.948352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:05.975137  620795 cri.go:89] found id: ""
	I1213 12:07:05.975162  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.975211  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:05.975222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:05.975283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:06.006992  620795 cri.go:89] found id: ""
	I1213 12:07:06.007020  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.007030  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:06.007037  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:06.007106  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:06.035032  620795 cri.go:89] found id: ""
	I1213 12:07:06.035067  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.035077  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:06.035084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:06.035157  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:06.066833  620795 cri.go:89] found id: ""
	I1213 12:07:06.066865  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.066875  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:06.066885  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:06.066899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:06.134254  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:06.134284  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:06.134297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:06.163816  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:06.163852  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:06.194055  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:06.194084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:06.262450  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:06.262550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:08.779798  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:08.793568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:08.793654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:08.848358  620795 cri.go:89] found id: ""
	I1213 12:07:08.848399  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.848408  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:08.848415  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:08.848485  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:08.881239  620795 cri.go:89] found id: ""
	I1213 12:07:08.881268  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.881278  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:08.881284  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:08.881358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:08.912007  620795 cri.go:89] found id: ""
	I1213 12:07:08.912038  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.912059  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:08.912070  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:08.912143  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:08.948718  620795 cri.go:89] found id: ""
	I1213 12:07:08.948744  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.948754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:08.948760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:08.948815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:08.974195  620795 cri.go:89] found id: ""
	I1213 12:07:08.974224  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.974234  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:08.974240  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:08.974298  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:09.000368  620795 cri.go:89] found id: ""
	I1213 12:07:09.000409  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.000420  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:09.000428  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:09.000500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:09.027504  620795 cri.go:89] found id: ""
	I1213 12:07:09.027539  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.027548  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:09.027554  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:09.027611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:09.052844  620795 cri.go:89] found id: ""
	I1213 12:07:09.052870  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.052879  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:09.052888  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:09.052899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:09.080443  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:09.080483  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:09.109721  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:09.109747  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:09.174545  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:09.174581  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:09.192943  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:09.192974  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:09.036994  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:11.537211  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:09.256162  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:11.756459  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:11.766714  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:11.766784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:11.797701  620795 cri.go:89] found id: ""
	I1213 12:07:11.797728  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.797737  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:11.797753  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:11.797832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:11.833489  620795 cri.go:89] found id: ""
	I1213 12:07:11.833563  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.833585  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:11.833604  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:11.833692  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:11.869283  620795 cri.go:89] found id: ""
	I1213 12:07:11.869305  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.869314  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:11.869320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:11.869376  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:11.899820  620795 cri.go:89] found id: ""
	I1213 12:07:11.899845  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.899855  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:11.899862  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:11.899925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:11.926125  620795 cri.go:89] found id: ""
	I1213 12:07:11.926150  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.926159  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:11.926166  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:11.926224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:11.952049  620795 cri.go:89] found id: ""
	I1213 12:07:11.952131  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.952165  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:11.952178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:11.952250  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:11.982382  620795 cri.go:89] found id: ""
	I1213 12:07:11.982407  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.982415  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:11.982421  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:11.982494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:12.014887  620795 cri.go:89] found id: ""
	I1213 12:07:12.014912  620795 logs.go:282] 0 containers: []
	W1213 12:07:12.014921  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:12.014931  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:12.014943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:12.080370  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:12.080407  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:12.097493  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:12.097534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:12.163658  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:12.163680  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:12.163692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:12.192505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:12.192544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:14.037223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:16.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:14.721085  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:14.731999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:14.732070  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:14.758997  620795 cri.go:89] found id: ""
	I1213 12:07:14.759023  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.759032  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:14.759039  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:14.759098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:14.831264  620795 cri.go:89] found id: ""
	I1213 12:07:14.831294  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.831303  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:14.831310  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:14.831366  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:14.882934  620795 cri.go:89] found id: ""
	I1213 12:07:14.882964  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.882973  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:14.882980  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:14.883040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:14.916858  620795 cri.go:89] found id: ""
	I1213 12:07:14.916888  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.916898  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:14.916905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:14.916969  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:14.942297  620795 cri.go:89] found id: ""
	I1213 12:07:14.942334  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.942343  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:14.942355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:14.942431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:14.967905  620795 cri.go:89] found id: ""
	I1213 12:07:14.967927  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.967936  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:14.967942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:14.968000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:14.993041  620795 cri.go:89] found id: ""
	I1213 12:07:14.993107  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.993131  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:14.993145  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:14.993224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:15.027730  620795 cri.go:89] found id: ""
	I1213 12:07:15.027755  620795 logs.go:282] 0 containers: []
	W1213 12:07:15.027765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:15.027776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:15.027789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:15.095470  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:15.095507  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:15.113485  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:15.113567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:15.183456  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:15.183481  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:15.183497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:15.212670  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:15.212706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:17.745028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:17.755868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:17.755965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:17.830528  620795 cri.go:89] found id: ""
	I1213 12:07:17.830551  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.830559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:17.830585  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:17.830654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:17.866003  620795 cri.go:89] found id: ""
	I1213 12:07:17.866029  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.866038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:17.866044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:17.866102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:17.891564  620795 cri.go:89] found id: ""
	I1213 12:07:17.891588  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.891597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:17.891603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:17.891664  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:17.918740  620795 cri.go:89] found id: ""
	I1213 12:07:17.918768  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.918776  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:17.918783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:17.918845  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:17.950736  620795 cri.go:89] found id: ""
	I1213 12:07:17.950774  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.950784  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:17.950790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:17.950854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:17.976775  620795 cri.go:89] found id: ""
	I1213 12:07:17.976799  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.976809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:17.976816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:17.976883  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:18.008430  620795 cri.go:89] found id: ""
	I1213 12:07:18.008460  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.008469  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:18.008477  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:18.008564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:18.037446  620795 cri.go:89] found id: ""
	I1213 12:07:18.037477  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.037488  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:18.037502  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:18.037517  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:18.068414  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:18.068443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:18.138588  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:18.138627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
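The host-side log collection can also be run directly on the node; a sketch that mirrors the commands in the log (--no-pager added only for interactive use):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400 --no-pager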
	I1213 12:07:18.155698  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:18.155729  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:18.222792  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:18.222835  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:18.222847  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
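The describe-nodes failure above is consistent with the empty kube-apiserver listing: nothing is serving on localhost:8443, so every kubectl call is refused. A few quick checks on the node (a sketch; availability of ss and curl in the node image is an assumption):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    sudo crictl ps -a --name kube-apiserver
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"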
	W1213 12:07:19.037064  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:21.536199  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
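Process 622913 is the no-preload start-up waiting for the node to report Ready; the same check can be issued by hand through the profile's kubeconfig context (context name taken from the log; jsonpath is just one way to read the condition):

    kubectl --context no-preload-307409 get node no-preload-307409 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

While 192.168.85.2:8443 refuses connections this fails the same way, so the retries above continue until the apiserver comes up or the wait times out.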
	I1213 12:07:20.751476  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:20.762121  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:20.762190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:20.818771  620795 cri.go:89] found id: ""
	I1213 12:07:20.818794  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.818803  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:20.818810  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:20.818877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:20.873533  620795 cri.go:89] found id: ""
	I1213 12:07:20.873556  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.873564  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:20.873581  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:20.873639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:20.900689  620795 cri.go:89] found id: ""
	I1213 12:07:20.900716  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.900725  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:20.900732  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:20.900790  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:20.926298  620795 cri.go:89] found id: ""
	I1213 12:07:20.926324  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.926334  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:20.926340  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:20.926400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:20.955692  620795 cri.go:89] found id: ""
	I1213 12:07:20.955767  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.955789  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:20.955808  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:20.955904  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:20.981101  620795 cri.go:89] found id: ""
	I1213 12:07:20.981126  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.981135  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:20.981146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:20.981208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:21.012906  620795 cri.go:89] found id: ""
	I1213 12:07:21.012933  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.012942  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:21.012949  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:21.013024  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:21.043717  620795 cri.go:89] found id: ""
	I1213 12:07:21.043743  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.043753  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:21.043764  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:21.043776  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:21.116319  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:21.116368  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:21.133173  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:21.133204  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:21.201103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:21.201127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:21.201140  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:21.229422  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:21.229457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
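With crictl reporting no containers at all, the CRI-O journal gathered above is the most telling artifact: it shows whether the runtime ever received create requests or rejected them. A short way to scan it for problems (a sketch; the grep pattern is an assumption):

    sudo journalctl -u crio -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40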
	I1213 12:07:23.763349  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:23.781088  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:23.781159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:23.857623  620795 cri.go:89] found id: ""
	I1213 12:07:23.857648  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.857666  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:23.857673  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:23.857736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:23.882807  620795 cri.go:89] found id: ""
	I1213 12:07:23.882833  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.882842  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:23.882849  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:23.882907  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:23.908402  620795 cri.go:89] found id: ""
	I1213 12:07:23.908430  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.908440  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:23.908447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:23.908506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:23.933800  620795 cri.go:89] found id: ""
	I1213 12:07:23.933826  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.933835  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:23.933841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:23.933919  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:23.959222  620795 cri.go:89] found id: ""
	I1213 12:07:23.959248  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.959259  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:23.959266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:23.959352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:23.985470  620795 cri.go:89] found id: ""
	I1213 12:07:23.985496  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.985505  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:23.985512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:23.985570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:24.014442  620795 cri.go:89] found id: ""
	I1213 12:07:24.014477  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.014487  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:24.014494  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:24.014556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:24.043282  620795 cri.go:89] found id: ""
	I1213 12:07:24.043308  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.043318  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:24.043328  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:24.043340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:24.075046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:24.075073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:24.143658  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:24.143701  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:24.160736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:24.160765  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:24.224652  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:24.224675  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:24.224692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:23.536309  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:25.537129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:28.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:26.754848  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:26.765356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:26.765429  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:26.818982  620795 cri.go:89] found id: ""
	I1213 12:07:26.819005  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.819013  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:26.819020  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:26.819078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:26.871231  620795 cri.go:89] found id: ""
	I1213 12:07:26.871253  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.871262  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:26.871268  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:26.871326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:26.898363  620795 cri.go:89] found id: ""
	I1213 12:07:26.898443  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.898467  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:26.898486  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:26.898578  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:26.923840  620795 cri.go:89] found id: ""
	I1213 12:07:26.923866  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.923875  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:26.923882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:26.923940  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:26.952921  620795 cri.go:89] found id: ""
	I1213 12:07:26.952950  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.952960  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:26.952967  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:26.953028  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:26.984162  620795 cri.go:89] found id: ""
	I1213 12:07:26.984188  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.984197  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:26.984203  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:26.984282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:27.022329  620795 cri.go:89] found id: ""
	I1213 12:07:27.022397  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.022413  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:27.022420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:27.022479  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:27.048366  620795 cri.go:89] found id: ""
	I1213 12:07:27.048391  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.048401  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:27.048410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:27.048423  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:27.076996  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:27.077029  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:27.149458  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:27.149509  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:27.167444  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:27.167473  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:27.235232  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:27.235258  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:27.235270  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:30.537006  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:33.036221  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:29.764538  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:29.791446  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:29.791560  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:29.844876  620795 cri.go:89] found id: ""
	I1213 12:07:29.844953  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.844976  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:29.844996  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:29.845082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:29.884357  620795 cri.go:89] found id: ""
	I1213 12:07:29.884423  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.884441  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:29.884449  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:29.884508  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:29.914712  620795 cri.go:89] found id: ""
	I1213 12:07:29.914738  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.914748  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:29.914755  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:29.914813  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:29.940420  620795 cri.go:89] found id: ""
	I1213 12:07:29.940500  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.940516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:29.940524  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:29.940585  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:29.970378  620795 cri.go:89] found id: ""
	I1213 12:07:29.970404  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.970413  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:29.970420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:29.970478  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:29.996803  620795 cri.go:89] found id: ""
	I1213 12:07:29.996881  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.996898  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:29.996907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:29.996983  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:30.040874  620795 cri.go:89] found id: ""
	I1213 12:07:30.040904  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.040913  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:30.040920  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:30.040995  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:30.083632  620795 cri.go:89] found id: ""
	I1213 12:07:30.083658  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.083667  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:30.083676  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:30.083689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:30.149516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:30.149553  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:30.167731  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:30.167816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:30.233503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:30.233567  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:30.233586  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:30.263464  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:30.263497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:32.796303  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:32.813180  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:32.813263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:32.849335  620795 cri.go:89] found id: ""
	I1213 12:07:32.849413  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.849456  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:32.849481  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:32.849570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:32.880068  620795 cri.go:89] found id: ""
	I1213 12:07:32.880092  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.880101  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:32.880107  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:32.880165  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:32.907166  620795 cri.go:89] found id: ""
	I1213 12:07:32.907193  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.907202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:32.907209  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:32.907266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:32.933296  620795 cri.go:89] found id: ""
	I1213 12:07:32.933366  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.933388  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:32.933407  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:32.933500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:32.959040  620795 cri.go:89] found id: ""
	I1213 12:07:32.959106  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.959130  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:32.959149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:32.959233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:32.989508  620795 cri.go:89] found id: ""
	I1213 12:07:32.989531  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.989540  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:32.989546  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:32.989629  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:33.018978  620795 cri.go:89] found id: ""
	I1213 12:07:33.019002  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.019010  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:33.019017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:33.019098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:33.046327  620795 cri.go:89] found id: ""
	I1213 12:07:33.046359  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.046368  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:33.046378  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:33.046419  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:33.075176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:33.075213  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:33.107277  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:33.107309  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:33.174349  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:33.174384  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:33.192737  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:33.192770  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:33.259992  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:07:35.037005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:37.037071  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:35.760267  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:35.771899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:35.771965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:35.816451  620795 cri.go:89] found id: ""
	I1213 12:07:35.816499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.816508  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:35.816519  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:35.816576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:35.874010  620795 cri.go:89] found id: ""
	I1213 12:07:35.874031  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.874040  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:35.874046  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:35.874109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:35.901470  620795 cri.go:89] found id: ""
	I1213 12:07:35.901499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.901509  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:35.901515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:35.901577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:35.929967  620795 cri.go:89] found id: ""
	I1213 12:07:35.929988  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.929997  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:35.930004  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:35.930061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:35.959220  620795 cri.go:89] found id: ""
	I1213 12:07:35.959245  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.959255  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:35.959262  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:35.959323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:35.988889  620795 cri.go:89] found id: ""
	I1213 12:07:35.988916  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.988925  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:35.988932  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:35.988990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:36.017868  620795 cri.go:89] found id: ""
	I1213 12:07:36.017896  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.017906  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:36.017912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:36.017975  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:36.046482  620795 cri.go:89] found id: ""
	I1213 12:07:36.046508  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.046517  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:36.046527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:36.046539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:36.063480  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:36.063675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:36.134374  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:36.134437  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:36.134465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:36.164786  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:36.164831  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:36.195048  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:36.195077  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:38.762384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:38.773774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:38.773860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:38.823096  620795 cri.go:89] found id: ""
	I1213 12:07:38.823118  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.823127  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:38.823133  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:38.823192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:38.859735  620795 cri.go:89] found id: ""
	I1213 12:07:38.859758  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.859766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:38.859773  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:38.859832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:38.888780  620795 cri.go:89] found id: ""
	I1213 12:07:38.888806  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.888815  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:38.888821  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:38.888885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:38.918480  620795 cri.go:89] found id: ""
	I1213 12:07:38.918506  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.918516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:38.918522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:38.918579  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:38.944442  620795 cri.go:89] found id: ""
	I1213 12:07:38.944475  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.944485  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:38.944492  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:38.944548  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:38.972111  620795 cri.go:89] found id: ""
	I1213 12:07:38.972138  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.972148  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:38.972156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:38.972217  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:38.999220  620795 cri.go:89] found id: ""
	I1213 12:07:38.999249  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.999259  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:38.999266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:38.999387  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:39.027462  620795 cri.go:89] found id: ""
	I1213 12:07:39.027489  620795 logs.go:282] 0 containers: []
	W1213 12:07:39.027498  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:39.027508  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:39.027551  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:39.045387  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:39.045421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:39.113555  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:39.113577  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:39.113591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:39.141868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:39.141905  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:39.170660  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:39.170687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:07:39.536473  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:41.536533  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
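The block above is one pass of minikube's control-plane health sweep for this profile: it looks for a running kube-apiserver process, lists CRI containers for each control-plane component (all of which come back empty), then collects kubelet, dmesg, CRI-O and `kubectl describe nodes` output, with the describe step failing because nothing is listening on localhost:8443. The same sweep repeats below until the start timeout. A minimal shell sketch of the equivalent manual checks, using only the commands and paths that appear in the log lines above (run inside the node, for example via `minikube ssh`), would be:

	# Illustrative replay of the sweep above; the kubectl path and component names
	# are taken from the log lines and are assumptions for any other run.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  [ -z "$(sudo crictl ps -a --quiet --name="$c")" ] && echo "no container matching \"$c\""
	done
	sudo journalctl -u kubelet -n 400                                          # kubelet logs
	sudo journalctl -u crio -n 400                                             # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings/errors
	# The step that fails with "connection refused": no apiserver is serving on :8443.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig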
	I1213 12:07:41.738914  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:41.749712  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:41.749788  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:41.815733  620795 cri.go:89] found id: ""
	I1213 12:07:41.815757  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.815767  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:41.815774  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:41.815837  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:41.853772  620795 cri.go:89] found id: ""
	I1213 12:07:41.853794  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.853802  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:41.853808  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:41.853864  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:41.880989  620795 cri.go:89] found id: ""
	I1213 12:07:41.881012  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.881021  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:41.881027  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:41.881085  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:41.910432  620795 cri.go:89] found id: ""
	I1213 12:07:41.910455  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.910464  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:41.910470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:41.910525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:41.938539  620795 cri.go:89] found id: ""
	I1213 12:07:41.938561  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.938570  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:41.938576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:41.938636  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:41.964574  620795 cri.go:89] found id: ""
	I1213 12:07:41.964608  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.964617  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:41.964624  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:41.964681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:41.989355  620795 cri.go:89] found id: ""
	I1213 12:07:41.989380  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.989389  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:41.989396  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:41.989456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:42.019802  620795 cri.go:89] found id: ""
	I1213 12:07:42.019830  620795 logs.go:282] 0 containers: []
	W1213 12:07:42.019839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:42.019849  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:42.019861  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:42.052058  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:42.052087  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:42.123300  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:42.123360  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:42.144729  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:42.144768  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:42.227868  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:42.227896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:42.227910  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:44.037002  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:46.037183  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:44.760193  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:44.770916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:44.770989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:44.803100  620795 cri.go:89] found id: ""
	I1213 12:07:44.803124  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.803133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:44.803140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:44.803195  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:44.851212  620795 cri.go:89] found id: ""
	I1213 12:07:44.851235  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.851244  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:44.851250  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:44.851307  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:44.902052  620795 cri.go:89] found id: ""
	I1213 12:07:44.902075  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.902084  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:44.902090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:44.902150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:44.933898  620795 cri.go:89] found id: ""
	I1213 12:07:44.933926  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.933935  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:44.933942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:44.934026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:44.963132  620795 cri.go:89] found id: ""
	I1213 12:07:44.963158  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.963167  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:44.963174  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:44.963261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:44.988132  620795 cri.go:89] found id: ""
	I1213 12:07:44.988163  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.988174  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:44.988181  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:44.988238  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:45.046906  620795 cri.go:89] found id: ""
	I1213 12:07:45.046934  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.046943  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:45.046951  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:45.047019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:45.080632  620795 cri.go:89] found id: ""
	I1213 12:07:45.080730  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.080752  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:45.080792  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:45.080810  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:45.157685  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:45.157797  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:45.212507  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:45.212574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:45.292666  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:45.292707  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:45.292720  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:45.321658  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:45.321690  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:47.858977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:47.870353  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:47.870425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:47.902849  620795 cri.go:89] found id: ""
	I1213 12:07:47.902874  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.902883  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:47.902890  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:47.902958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:47.928841  620795 cri.go:89] found id: ""
	I1213 12:07:47.928866  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.928875  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:47.928882  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:47.928943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:47.954469  620795 cri.go:89] found id: ""
	I1213 12:07:47.954494  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.954503  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:47.954510  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:47.954571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:47.984225  620795 cri.go:89] found id: ""
	I1213 12:07:47.984248  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.984257  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:47.984263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:47.984327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:48.013666  620795 cri.go:89] found id: ""
	I1213 12:07:48.013694  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.013704  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:48.013710  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:48.013776  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:48.043313  620795 cri.go:89] found id: ""
	I1213 12:07:48.043341  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.043351  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:48.043358  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:48.043445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:48.070641  620795 cri.go:89] found id: ""
	I1213 12:07:48.070669  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.070680  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:48.070687  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:48.070767  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:48.096729  620795 cri.go:89] found id: ""
	I1213 12:07:48.096754  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.096764  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:48.096773  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:48.096785  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:48.129289  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:48.129318  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:48.196743  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:48.196781  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:48.213775  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:48.213802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:48.282000  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:48.282076  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:48.282104  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:48.537001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:50.537083  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:53.037078  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:50.813946  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:50.834838  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:50.834928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:50.871307  620795 cri.go:89] found id: ""
	I1213 12:07:50.871329  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.871337  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:50.871343  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:50.871400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:50.900887  620795 cri.go:89] found id: ""
	I1213 12:07:50.900913  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.900922  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:50.900929  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:50.900987  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:50.926497  620795 cri.go:89] found id: ""
	I1213 12:07:50.926569  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.926606  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:50.926631  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:50.926721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:50.954230  620795 cri.go:89] found id: ""
	I1213 12:07:50.954256  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.954266  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:50.954273  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:50.954331  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:50.980389  620795 cri.go:89] found id: ""
	I1213 12:07:50.980414  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.980425  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:50.980431  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:50.980490  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:51.007396  620795 cri.go:89] found id: ""
	I1213 12:07:51.007423  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.007433  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:51.007444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:51.007507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:51.038515  620795 cri.go:89] found id: ""
	I1213 12:07:51.038540  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.038550  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:51.038556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:51.038611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:51.066063  620795 cri.go:89] found id: ""
	I1213 12:07:51.066088  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.066096  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:51.066111  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:51.066122  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:51.131363  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:51.131402  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:51.148223  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:51.148253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:51.211768  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:51.211791  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:51.211807  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:51.239792  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:51.239825  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:53.772909  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:53.794190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:53.794255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:53.863195  620795 cri.go:89] found id: ""
	I1213 12:07:53.863228  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.863239  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:53.863246  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:53.863323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:53.894744  620795 cri.go:89] found id: ""
	I1213 12:07:53.894812  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.894836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:53.894855  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:53.894941  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:53.922176  620795 cri.go:89] found id: ""
	I1213 12:07:53.922244  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.922266  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:53.922284  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:53.922371  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:53.948409  620795 cri.go:89] found id: ""
	I1213 12:07:53.948437  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.948446  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:53.948453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:53.948512  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:53.974142  620795 cri.go:89] found id: ""
	I1213 12:07:53.974222  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.974244  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:53.974263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:53.974369  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:54.002307  620795 cri.go:89] found id: ""
	I1213 12:07:54.002343  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.002353  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:54.002361  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:54.002440  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:54.030334  620795 cri.go:89] found id: ""
	I1213 12:07:54.030413  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.030438  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:54.030457  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:54.030566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:54.056614  620795 cri.go:89] found id: ""
	I1213 12:07:54.056697  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.056713  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:54.056724  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:54.056737  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:54.124215  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:54.124253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:54.141024  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:54.141052  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:54.203423  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:54.203445  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:54.203457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:54.231323  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:54.231355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:55.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:57.537019  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:56.762827  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:56.786084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:56.786208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:56.855486  620795 cri.go:89] found id: ""
	I1213 12:07:56.855531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.855542  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:56.855549  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:56.855615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:56.883436  620795 cri.go:89] found id: ""
	I1213 12:07:56.883531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.883557  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:56.883587  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:56.883648  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:56.908626  620795 cri.go:89] found id: ""
	I1213 12:07:56.908708  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.908739  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:56.908752  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:56.908821  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:56.935174  620795 cri.go:89] found id: ""
	I1213 12:07:56.935201  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.935210  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:56.935217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:56.935302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:56.964101  620795 cri.go:89] found id: ""
	I1213 12:07:56.964128  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.964139  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:56.964146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:56.964232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:56.989991  620795 cri.go:89] found id: ""
	I1213 12:07:56.990016  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.990025  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:56.990032  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:56.990117  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:57.021908  620795 cri.go:89] found id: ""
	I1213 12:07:57.021934  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.021944  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:57.021952  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:57.022015  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:57.050893  620795 cri.go:89] found id: ""
	I1213 12:07:57.050919  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.050929  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:57.050939  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:57.050958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:57.114649  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:57.114709  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:57.114743  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:57.142743  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:57.142778  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:57.171088  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:57.171120  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:57.236905  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:57.236948  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:00.039297  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:02.536522  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
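Interleaved with that sweep, a second test process (PID 622913, driving the no-preload-307409 profile) keeps polling the node's Ready condition against https://192.168.85.2:8443 and getting connection refused, so both starts are stuck on the same missing apiserver. An illustrative way to confirm the symptom from the host, assuming the address and path shown in the retries above, is:

	# Hypothetical manual probe of the endpoint the retries above are hitting;
	# -k skips TLS verification since the host does not trust the cluster CA.
	# Any HTTP response (even 401/403) would mean the apiserver is up; these runs
	# instead get "connection refused".
	curl -k --max-time 5 https://192.168.85.2:8443/api/v1/nodes/no-preload-307409 \
	  || echo "apiserver on 192.168.85.2:8443 refused the connection"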
	I1213 12:07:59.754255  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:59.764877  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:59.764948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:59.800655  620795 cri.go:89] found id: ""
	I1213 12:07:59.800682  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.800691  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:59.800698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:59.800757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:59.844261  620795 cri.go:89] found id: ""
	I1213 12:07:59.844289  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.844299  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:59.844305  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:59.844363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:59.890278  620795 cri.go:89] found id: ""
	I1213 12:07:59.890303  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.890313  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:59.890319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:59.890379  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:59.918606  620795 cri.go:89] found id: ""
	I1213 12:07:59.918632  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.918641  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:59.918647  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:59.918703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:59.947895  620795 cri.go:89] found id: ""
	I1213 12:07:59.947918  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.947928  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:59.947934  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:59.947993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:59.973045  620795 cri.go:89] found id: ""
	I1213 12:07:59.973073  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.973082  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:59.973089  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:59.973163  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:00.009231  620795 cri.go:89] found id: ""
	I1213 12:08:00.009320  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.009353  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:00.009374  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:00.009507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:00.119476  620795 cri.go:89] found id: ""
	I1213 12:08:00.119618  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.119644  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:00.119687  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:00.119721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:00.145226  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:00.145450  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:00.282893  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:00.282923  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:00.282944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:00.371336  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:00.371439  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:00.430461  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:00.430503  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.002113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:03.014603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:03.014679  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:03.042673  620795 cri.go:89] found id: ""
	I1213 12:08:03.042701  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.042711  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:03.042718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:03.042778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:03.074056  620795 cri.go:89] found id: ""
	I1213 12:08:03.074133  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.074164  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:03.074185  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:03.074301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:03.101450  620795 cri.go:89] found id: ""
	I1213 12:08:03.101485  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.101495  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:03.101502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:03.101564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:03.132013  620795 cri.go:89] found id: ""
	I1213 12:08:03.132042  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.132053  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:03.132060  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:03.132123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:03.158035  620795 cri.go:89] found id: ""
	I1213 12:08:03.158057  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.158067  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:03.158074  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:03.158131  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:03.183772  620795 cri.go:89] found id: ""
	I1213 12:08:03.183800  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.183809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:03.183816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:03.183879  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:03.209685  620795 cri.go:89] found id: ""
	I1213 12:08:03.209710  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.209718  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:03.209725  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:03.209809  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:03.238718  620795 cri.go:89] found id: ""
	I1213 12:08:03.238742  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.238751  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:03.238760  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:03.238771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:03.266176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:03.266211  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:03.295327  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:03.295357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.371751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:03.371796  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:03.388535  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:03.388569  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:03.455075  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:05.037001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:07.037153  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:05.956468  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:05.967247  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:05.967349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:05.992470  620795 cri.go:89] found id: ""
	I1213 12:08:05.992495  620795 logs.go:282] 0 containers: []
	W1213 12:08:05.992504  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:05.992510  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:05.992576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:06.025309  620795 cri.go:89] found id: ""
	I1213 12:08:06.025339  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.025349  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:06.025356  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:06.025417  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:06.056164  620795 cri.go:89] found id: ""
	I1213 12:08:06.056192  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.056202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:06.056208  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:06.056268  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:06.091020  620795 cri.go:89] found id: ""
	I1213 12:08:06.091047  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.091057  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:06.091063  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:06.091124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:06.117741  620795 cri.go:89] found id: ""
	I1213 12:08:06.117767  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.117776  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:06.117792  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:06.117850  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:06.143430  620795 cri.go:89] found id: ""
	I1213 12:08:06.143454  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.143465  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:06.143472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:06.143558  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:06.169857  620795 cri.go:89] found id: ""
	I1213 12:08:06.169883  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.169892  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:06.169899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:06.169959  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:06.196298  620795 cri.go:89] found id: ""
	I1213 12:08:06.196325  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.196335  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:06.196344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:06.196385  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:06.212572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:06.212599  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:06.278450  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:06.278473  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:06.278485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:06.306640  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:06.306679  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:06.336266  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:06.336295  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:08.901791  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:08.912829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:08.912897  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:08.942435  620795 cri.go:89] found id: ""
	I1213 12:08:08.942467  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.942476  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:08.942483  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:08.942552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:08.968397  620795 cri.go:89] found id: ""
	I1213 12:08:08.968475  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.968508  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:08.968533  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:08.968615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:08.995667  620795 cri.go:89] found id: ""
	I1213 12:08:08.995734  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.995757  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:08.995776  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:08.995851  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:09.026748  620795 cri.go:89] found id: ""
	I1213 12:08:09.026827  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.026859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:09.026878  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:09.026961  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:09.052881  620795 cri.go:89] found id: ""
	I1213 12:08:09.052910  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.052919  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:09.052926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:09.053016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:09.079635  620795 cri.go:89] found id: ""
	I1213 12:08:09.079663  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.079673  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:09.079679  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:09.079740  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:09.106465  620795 cri.go:89] found id: ""
	I1213 12:08:09.106499  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.106507  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:09.106529  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:09.106610  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:09.132296  620795 cri.go:89] found id: ""
	I1213 12:08:09.132373  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.132389  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:09.132400  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:09.132411  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:09.198891  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:09.198937  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:09.215689  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:09.215718  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:09.536381  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:11.536495  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:09.283376  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:09.283399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:09.283412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:09.311953  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:09.311995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:11.844673  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:11.854957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:11.855031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:11.884334  620795 cri.go:89] found id: ""
	I1213 12:08:11.884361  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.884370  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:11.884377  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:11.884438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:11.911693  620795 cri.go:89] found id: ""
	I1213 12:08:11.911715  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.911724  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:11.911730  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:11.911785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:11.939653  620795 cri.go:89] found id: ""
	I1213 12:08:11.939679  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.939688  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:11.939694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:11.939753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:11.965596  620795 cri.go:89] found id: ""
	I1213 12:08:11.965622  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.965631  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:11.965639  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:11.965695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:11.994822  620795 cri.go:89] found id: ""
	I1213 12:08:11.994848  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.994857  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:11.994863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:11.994921  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:12.027085  620795 cri.go:89] found id: ""
	I1213 12:08:12.027111  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.027119  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:12.027127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:12.027189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:12.060592  620795 cri.go:89] found id: ""
	I1213 12:08:12.060621  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.060631  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:12.060637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:12.060695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:12.087001  620795 cri.go:89] found id: ""
	I1213 12:08:12.087026  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.087035  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:12.087046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:12.087057  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:12.154968  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:12.155007  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:12.173266  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:12.173296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:12.238320  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:12.238342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:12.238353  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:12.266852  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:12.266886  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:08:14.037082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:16.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:14.799502  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:14.811316  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:14.811495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:14.868310  620795 cri.go:89] found id: ""
	I1213 12:08:14.868404  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.868430  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:14.868485  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:14.868662  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:14.910677  620795 cri.go:89] found id: ""
	I1213 12:08:14.910744  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.910766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:14.910785  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:14.910872  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:14.939727  620795 cri.go:89] found id: ""
	I1213 12:08:14.939767  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.939777  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:14.939783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:14.939849  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:14.966035  620795 cri.go:89] found id: ""
	I1213 12:08:14.966069  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.966078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:14.966086  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:14.966160  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:14.994530  620795 cri.go:89] found id: ""
	I1213 12:08:14.994596  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.994619  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:14.994641  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:14.994727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:15.032176  620795 cri.go:89] found id: ""
	I1213 12:08:15.032213  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.032223  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:15.032230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:15.032294  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:15.063866  620795 cri.go:89] found id: ""
	I1213 12:08:15.063900  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.063910  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:15.063916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:15.063977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:15.094824  620795 cri.go:89] found id: ""
	I1213 12:08:15.094857  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.094867  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:15.094876  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:15.094888  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:15.123857  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:15.123926  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:15.189408  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:15.189444  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:15.208112  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:15.208143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:15.272770  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:15.272794  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:15.272806  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:17.802242  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:17.818907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:17.818976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:17.860553  620795 cri.go:89] found id: ""
	I1213 12:08:17.860577  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.860586  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:17.860594  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:17.860663  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:17.890844  620795 cri.go:89] found id: ""
	I1213 12:08:17.890868  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.890877  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:17.890883  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:17.890937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:17.916758  620795 cri.go:89] found id: ""
	I1213 12:08:17.916784  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.916794  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:17.916800  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:17.916860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:17.946527  620795 cri.go:89] found id: ""
	I1213 12:08:17.946564  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.946573  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:17.946598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:17.946684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:17.971981  620795 cri.go:89] found id: ""
	I1213 12:08:17.972004  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.972013  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:17.972020  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:17.972075  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:17.997005  620795 cri.go:89] found id: ""
	I1213 12:08:17.997042  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.997052  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:17.997059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:17.997126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:18.029007  620795 cri.go:89] found id: ""
	I1213 12:08:18.029038  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.029054  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:18.029061  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:18.029120  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:18.056596  620795 cri.go:89] found id: ""
	I1213 12:08:18.056625  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.056637  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:18.056647  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:18.056661  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:18.074846  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:18.074874  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:18.144092  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:18.144157  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:18.144176  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:18.173096  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:18.173134  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:18.208914  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:18.208943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:19.037143  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:21.537005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:20.774528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:20.788572  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:20.788639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:20.858764  620795 cri.go:89] found id: ""
	I1213 12:08:20.858786  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.858794  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:20.858800  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:20.858857  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:20.887866  620795 cri.go:89] found id: ""
	I1213 12:08:20.887888  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.887897  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:20.887904  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:20.887967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:20.918367  620795 cri.go:89] found id: ""
	I1213 12:08:20.918438  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.918462  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:20.918481  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:20.918566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:20.943267  620795 cri.go:89] found id: ""
	I1213 12:08:20.943292  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.943301  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:20.943308  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:20.943362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:20.972672  620795 cri.go:89] found id: ""
	I1213 12:08:20.972707  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.972716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:20.972723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:20.972781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:20.997368  620795 cri.go:89] found id: ""
	I1213 12:08:20.997394  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.997404  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:20.997411  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:20.997487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:21.029283  620795 cri.go:89] found id: ""
	I1213 12:08:21.029309  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.029319  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:21.029328  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:21.029382  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:21.054485  620795 cri.go:89] found id: ""
	I1213 12:08:21.054510  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.054520  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:21.054529  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:21.054540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:21.121036  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:21.121073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:21.137498  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:21.137526  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:21.201021  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:21.201047  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:21.201060  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:21.233120  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:21.233155  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:23.768528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:23.784788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:23.784875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:23.861902  620795 cri.go:89] found id: ""
	I1213 12:08:23.861933  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.861949  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:23.861956  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:23.862019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:23.890007  620795 cri.go:89] found id: ""
	I1213 12:08:23.890029  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.890038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:23.890044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:23.890104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:23.915427  620795 cri.go:89] found id: ""
	I1213 12:08:23.915450  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.915459  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:23.915465  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:23.915550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:23.941041  620795 cri.go:89] found id: ""
	I1213 12:08:23.941069  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.941078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:23.941085  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:23.941141  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:23.966860  620795 cri.go:89] found id: ""
	I1213 12:08:23.966886  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.966895  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:23.966902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:23.966958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:23.992499  620795 cri.go:89] found id: ""
	I1213 12:08:23.992528  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.992537  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:23.992558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:23.992616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:24.019996  620795 cri.go:89] found id: ""
	I1213 12:08:24.020030  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.020045  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:24.020052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:24.020129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:24.047181  620795 cri.go:89] found id: ""
	I1213 12:08:24.047216  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.047225  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:24.047234  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:24.047245  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:24.110372  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:24.110398  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:24.110412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:24.139714  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:24.139748  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:24.172397  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:24.172426  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:24.037139  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:26.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:24.240938  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:24.240975  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:26.757922  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:26.771140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:26.771256  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:26.808049  620795 cri.go:89] found id: ""
	I1213 12:08:26.808124  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.808149  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:26.808169  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:26.808258  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:26.845750  620795 cri.go:89] found id: ""
	I1213 12:08:26.845826  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.845851  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:26.845870  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:26.845951  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:26.885327  620795 cri.go:89] found id: ""
	I1213 12:08:26.885401  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.885424  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:26.885444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:26.885533  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:26.912813  620795 cri.go:89] found id: ""
	I1213 12:08:26.912844  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.912853  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:26.912860  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:26.912917  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:26.940224  620795 cri.go:89] found id: ""
	I1213 12:08:26.940301  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.940317  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:26.940325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:26.940383  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:26.970684  620795 cri.go:89] found id: ""
	I1213 12:08:26.970728  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.970738  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:26.970745  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:26.970825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:27.001739  620795 cri.go:89] found id: ""
	I1213 12:08:27.001821  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.001846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:27.001867  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:27.001968  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:27.029502  620795 cri.go:89] found id: ""
	I1213 12:08:27.029525  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.029533  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:27.029542  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:27.029561  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:27.097411  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:27.097433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:27.097445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:27.126207  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:27.126242  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:27.152776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:27.152814  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:27.218430  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:27.218466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:29.036447  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:31.536317  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:29.735087  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:29.746276  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:29.746353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:29.790488  620795 cri.go:89] found id: ""
	I1213 12:08:29.790563  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.790587  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:29.790607  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:29.790694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:29.863661  620795 cri.go:89] found id: ""
	I1213 12:08:29.863730  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.863747  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:29.863754  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:29.863822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:29.889696  620795 cri.go:89] found id: ""
	I1213 12:08:29.889723  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.889731  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:29.889738  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:29.889793  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:29.917557  620795 cri.go:89] found id: ""
	I1213 12:08:29.917619  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.917642  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:29.917657  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:29.917732  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:29.941179  620795 cri.go:89] found id: ""
	I1213 12:08:29.941201  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.941210  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:29.941217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:29.941276  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:29.965683  620795 cri.go:89] found id: ""
	I1213 12:08:29.965758  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.965775  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:29.965783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:29.965858  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:29.994076  620795 cri.go:89] found id: ""
	I1213 12:08:29.994111  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.994121  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:29.994127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:29.994189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:30.034696  620795 cri.go:89] found id: ""
	I1213 12:08:30.034723  620795 logs.go:282] 0 containers: []
	W1213 12:08:30.034733  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:30.034743  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:30.034756  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:30.103277  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:30.103319  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:30.120811  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:30.120901  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:30.194375  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:30.194399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:30.194412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:30.225794  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:30.225830  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:32.757391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:32.768065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:32.768178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:32.801083  620795 cri.go:89] found id: ""
	I1213 12:08:32.801105  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.801114  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:32.801123  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:32.801179  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:32.839546  620795 cri.go:89] found id: ""
	I1213 12:08:32.839567  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.839576  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:32.839582  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:32.839637  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:32.888939  620795 cri.go:89] found id: ""
	I1213 12:08:32.889005  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.889029  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:32.889044  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:32.889115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:32.926624  620795 cri.go:89] found id: ""
	I1213 12:08:32.926651  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.926666  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:32.926676  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:32.926752  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:32.958800  620795 cri.go:89] found id: ""
	I1213 12:08:32.958835  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.958844  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:32.958850  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:32.958916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:32.989617  620795 cri.go:89] found id: ""
	I1213 12:08:32.989692  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.989708  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:32.989721  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:32.989791  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:33.017551  620795 cri.go:89] found id: ""
	I1213 12:08:33.017623  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.017647  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:33.017659  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:33.017736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:33.043587  620795 cri.go:89] found id: ""
	I1213 12:08:33.043612  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.043621  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:33.043632  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:33.043644  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:33.114830  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:33.114904  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:33.114923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:33.144060  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:33.144098  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:33.174527  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:33.174559  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:33.242589  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:33.242622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:33.536995  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:35.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:38.037111  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:35.760100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:35.770376  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:35.770444  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:35.803335  620795 cri.go:89] found id: ""
	I1213 12:08:35.803356  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.803365  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:35.803371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:35.803427  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:35.837892  620795 cri.go:89] found id: ""
	I1213 12:08:35.837916  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.837926  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:35.837933  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:35.837989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:35.866561  620795 cri.go:89] found id: ""
	I1213 12:08:35.866588  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.866598  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:35.866605  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:35.866667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:35.892759  620795 cri.go:89] found id: ""
	I1213 12:08:35.892795  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.892804  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:35.892810  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:35.892880  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:35.923215  620795 cri.go:89] found id: ""
	I1213 12:08:35.923238  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.923247  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:35.923252  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:35.923310  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:35.950448  620795 cri.go:89] found id: ""
	I1213 12:08:35.950475  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.950484  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:35.950491  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:35.950546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:35.976121  620795 cri.go:89] found id: ""
	I1213 12:08:35.976149  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.976158  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:35.976165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:35.976247  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:36.007726  620795 cri.go:89] found id: ""
	I1213 12:08:36.007754  620795 logs.go:282] 0 containers: []
	W1213 12:08:36.007765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:36.007774  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:36.007789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:36.085423  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:36.085465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:36.104590  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:36.104621  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:36.174734  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:36.174757  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:36.174771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:36.204232  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:36.204271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:38.733384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:38.744052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:38.744118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:38.780661  620795 cri.go:89] found id: ""
	I1213 12:08:38.780685  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.780694  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:38.780704  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:38.780764  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:38.822383  620795 cri.go:89] found id: ""
	I1213 12:08:38.822407  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.822416  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:38.822422  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:38.822477  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:38.855498  620795 cri.go:89] found id: ""
	I1213 12:08:38.855544  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.855553  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:38.855565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:38.855619  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:38.885018  620795 cri.go:89] found id: ""
	I1213 12:08:38.885045  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.885055  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:38.885062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:38.885119  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:38.910126  620795 cri.go:89] found id: ""
	I1213 12:08:38.910162  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.910172  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:38.910179  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:38.910246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:38.940467  620795 cri.go:89] found id: ""
	I1213 12:08:38.940502  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.940513  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:38.940520  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:38.940597  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:38.966188  620795 cri.go:89] found id: ""
	I1213 12:08:38.966222  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.966232  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:38.966238  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:38.966303  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:38.995881  620795 cri.go:89] found id: ""
	I1213 12:08:38.995907  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.995917  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:38.995927  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:38.995939  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:39.015887  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:39.015917  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:39.098130  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:39.098150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:39.098163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:39.126236  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:39.126269  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:39.153815  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:39.153842  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:40.037886  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:42.536996  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:41.721729  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:41.732158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:41.732229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:41.760995  620795 cri.go:89] found id: ""
	I1213 12:08:41.761017  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.761026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:41.761033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:41.761087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:41.795082  620795 cri.go:89] found id: ""
	I1213 12:08:41.795105  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.795113  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:41.795119  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:41.795184  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:41.825959  620795 cri.go:89] found id: ""
	I1213 12:08:41.826033  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.826056  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:41.826076  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:41.826159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:41.852118  620795 cri.go:89] found id: ""
	I1213 12:08:41.852183  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.852198  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:41.852205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:41.852261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:41.877587  620795 cri.go:89] found id: ""
	I1213 12:08:41.877626  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.877636  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:41.877642  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:41.877706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:41.906166  620795 cri.go:89] found id: ""
	I1213 12:08:41.906192  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.906202  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:41.906216  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:41.906273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:41.935663  620795 cri.go:89] found id: ""
	I1213 12:08:41.935688  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.935697  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:41.935704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:41.935761  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:41.960919  620795 cri.go:89] found id: ""
	I1213 12:08:41.960943  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.960952  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:41.960960  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:41.960971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:41.989438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:41.989472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:42.026694  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:42.026779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:42.120242  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:42.120297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:42.141212  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:42.141246  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:42.216949  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:44.537110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:47.036204  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:44.717236  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:44.728891  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:44.728977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:44.753976  620795 cri.go:89] found id: ""
	I1213 12:08:44.754000  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.754008  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:44.754018  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:44.754078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:44.786705  620795 cri.go:89] found id: ""
	I1213 12:08:44.786732  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.786741  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:44.786748  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:44.786806  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:44.822299  620795 cri.go:89] found id: ""
	I1213 12:08:44.822328  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.822337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:44.822345  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:44.822401  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:44.856823  620795 cri.go:89] found id: ""
	I1213 12:08:44.856856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.856867  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:44.856873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:44.856930  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:44.882589  620795 cri.go:89] found id: ""
	I1213 12:08:44.882614  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.882623  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:44.882630  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:44.882688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:44.908466  620795 cri.go:89] found id: ""
	I1213 12:08:44.908491  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.908500  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:44.908507  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:44.908588  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:44.937829  620795 cri.go:89] found id: ""
	I1213 12:08:44.937856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.937865  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:44.937872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:44.937927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:44.963281  620795 cri.go:89] found id: ""
	I1213 12:08:44.963305  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.963315  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:44.963324  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:44.963335  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:44.991410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:44.991446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:45.037106  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:45.037139  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:45.136316  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:45.136362  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:45.159600  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:45.159635  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:45.275736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:47.775978  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:47.794424  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:47.794535  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:47.822730  620795 cri.go:89] found id: ""
	I1213 12:08:47.822773  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.822782  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:47.822794  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:47.822874  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:47.855882  620795 cri.go:89] found id: ""
	I1213 12:08:47.855909  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.855921  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:47.855928  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:47.855992  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:47.880824  620795 cri.go:89] found id: ""
	I1213 12:08:47.880849  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.880863  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:47.880870  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:47.880944  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:47.905536  620795 cri.go:89] found id: ""
	I1213 12:08:47.905558  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.905567  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:47.905573  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:47.905627  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:47.930629  620795 cri.go:89] found id: ""
	I1213 12:08:47.930651  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.930660  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:47.930666  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:47.930722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:47.963310  620795 cri.go:89] found id: ""
	I1213 12:08:47.963340  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.963348  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:47.963355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:47.963416  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:47.988259  620795 cri.go:89] found id: ""
	I1213 12:08:47.988284  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.988293  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:47.988300  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:47.988363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:48.016297  620795 cri.go:89] found id: ""
	I1213 12:08:48.016324  620795 logs.go:282] 0 containers: []
	W1213 12:08:48.016334  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:48.016344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:48.016358  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:48.036992  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:48.037157  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:48.110165  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:48.110186  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:48.110199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:48.138855  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:48.138892  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:48.167128  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:48.167162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:49.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:52.036223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:50.735817  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:50.746548  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:50.746616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:50.775549  620795 cri.go:89] found id: ""
	I1213 12:08:50.775575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.775585  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:50.775591  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:50.775646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:50.804612  620795 cri.go:89] found id: ""
	I1213 12:08:50.804635  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.804644  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:50.804650  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:50.804705  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:50.837625  620795 cri.go:89] found id: ""
	I1213 12:08:50.837650  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.837659  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:50.837665  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:50.837720  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:50.864589  620795 cri.go:89] found id: ""
	I1213 12:08:50.864612  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.864620  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:50.864627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:50.864687  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:50.889551  620795 cri.go:89] found id: ""
	I1213 12:08:50.889575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.889583  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:50.889589  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:50.889646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:50.919224  620795 cri.go:89] found id: ""
	I1213 12:08:50.919247  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.919255  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:50.919261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:50.919317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:50.944422  620795 cri.go:89] found id: ""
	I1213 12:08:50.944495  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.944574  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:50.944612  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:50.944696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:50.970021  620795 cri.go:89] found id: ""
	I1213 12:08:50.970086  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.970109  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:50.970132  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:50.970163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:50.986872  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:50.986906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:51.060506  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:51.060540  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:51.060552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:51.092480  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:51.092521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:51.123102  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:51.123131  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:53.694152  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:53.705704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:53.705773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:53.731245  620795 cri.go:89] found id: ""
	I1213 12:08:53.731268  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.731276  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:53.731282  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:53.731340  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:53.757925  620795 cri.go:89] found id: ""
	I1213 12:08:53.757957  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.757966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:53.757973  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:53.758036  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:53.808536  620795 cri.go:89] found id: ""
	I1213 12:08:53.808559  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.808568  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:53.808575  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:53.808635  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:53.840078  620795 cri.go:89] found id: ""
	I1213 12:08:53.840112  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.840122  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:53.840129  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:53.840189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:53.865894  620795 cri.go:89] found id: ""
	I1213 12:08:53.865917  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.865927  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:53.865933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:53.865993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:53.891498  620795 cri.go:89] found id: ""
	I1213 12:08:53.891542  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.891551  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:53.891558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:53.891621  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:53.917936  620795 cri.go:89] found id: ""
	I1213 12:08:53.917959  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.917968  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:53.917974  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:53.918032  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:53.943098  620795 cri.go:89] found id: ""
	I1213 12:08:53.943169  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.943193  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:53.943215  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:53.943252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:53.971597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:53.971637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:54.002508  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:54.002540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:54.080813  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:54.080899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:54.109629  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:54.109659  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:54.177694  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:54.036977  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:56.537074  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:56.677966  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:56.688667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:56.688741  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:56.713668  620795 cri.go:89] found id: ""
	I1213 12:08:56.713690  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.713699  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:56.713706  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:56.713762  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:56.741202  620795 cri.go:89] found id: ""
	I1213 12:08:56.741227  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.741236  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:56.741242  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:56.741339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:56.768922  620795 cri.go:89] found id: ""
	I1213 12:08:56.768942  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.768950  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:56.768957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:56.769013  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:56.797125  620795 cri.go:89] found id: ""
	I1213 12:08:56.797148  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.797157  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:56.797164  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:56.797218  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:56.824672  620795 cri.go:89] found id: ""
	I1213 12:08:56.824695  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.824703  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:56.824709  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:56.824763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:56.849420  620795 cri.go:89] found id: ""
	I1213 12:08:56.849446  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.849455  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:56.849462  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:56.849516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:56.875118  620795 cri.go:89] found id: ""
	I1213 12:08:56.875143  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.875152  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:56.875158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:56.875213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:56.900386  620795 cri.go:89] found id: ""
	I1213 12:08:56.900411  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.900420  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:56.900434  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:56.900446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:56.966130  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:56.966167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:56.982745  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:56.982773  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:57.073125  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:57.073146  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:57.073165  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:57.104552  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:57.104585  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:59.636110  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:59.649509  620795 out.go:203] 
	W1213 12:08:59.652376  620795 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 12:08:59.652409  620795 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 12:08:59.652418  620795 out.go:285] * Related issues:
	W1213 12:08:59.652431  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 12:08:59.652444  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 12:08:59.655226  620795 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494217646Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494225302Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494232317Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49423788Z" level=info msg="RDT not available in the host system"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49425041Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495095045Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495116264Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495131451Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495779293Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49580189Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495943824Z" level=info msg="Updated default CNI network name to "
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.496641734Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.497083731Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.497162501Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560451228Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560494723Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.56056025Z" level=info msg="Create NRI interface"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560660304Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560668633Z" level=info msg="runtime interface created"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.56068309Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560689769Z" level=info msg="runtime interface starting up..."
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560695849Z" level=info msg="starting plugins..."
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560708797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560776564Z" level=info msg="No systemd watchdog enabled"
	Dec 13 12:02:55 newest-cni-800979 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:09:03.337575   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:03.338311   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:03.339911   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:03.340482   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:03.342075   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:09:03 up  3:51,  0 user,  load average: 0.61, 0.76, 1.21
	Linux newest-cni-800979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:09:00 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:01 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 12:09:01 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:01 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:01 newest-cni-800979 kubelet[13360]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:01 newest-cni-800979 kubelet[13360]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:01 newest-cni-800979 kubelet[13360]: E1213 12:09:01.587951   13360 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:01 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:01 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:02 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 13 12:09:02 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:02 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:02 newest-cni-800979 kubelet[13374]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:02 newest-cni-800979 kubelet[13374]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:02 newest-cni-800979 kubelet[13374]: E1213 12:09:02.339328   13374 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:02 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:02 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:03 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 13 12:09:03 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:03 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:03 newest-cni-800979 kubelet[13400]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:03 newest-cni-800979 kubelet[13400]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:03 newest-cni-800979 kubelet[13400]: E1213 12:09:03.085570   13400 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:03 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:03 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (399.375547ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-800979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (376.20s)
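The kubelet journal quoted above shows the node agent crash-looping on "kubelet is configured to not run on a host using cgroup v1", which is why no kube-apiserver container ever appears and the run exits with K8S_APISERVER_MISSING. A minimal diagnostic sketch (not part of the test run), assuming the newest-cni-800979 profile from the log is still available:

    # cgroup2fs means the host is on cgroup v2; tmpfs means cgroup v1
    minikube -p newest-cni-800979 ssh -- stat -fc %T /sys/fs/cgroup/
    # re-check the kubelet restart loop reported in the journal above
    minikube -p newest-cni-800979 ssh -- sudo journalctl -u kubelet -n 20 --no-pager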

TestStartStop/group/no-preload/serial/SecondStart (375.34s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 12:03:05.575311  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:04:06.639911  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:05:44.683094  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:00.470556  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:07:27.930305  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:08:05.574876  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.573473844s)

-- stdout --
	* [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	* Pulling base image v0.0.48-1765275396-22083 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1213 12:03:03.050063  622913 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:03:03.050285  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050312  622913 out.go:374] Setting ErrFile to fd 2...
	I1213 12:03:03.050330  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050625  622913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:03:03.051085  622913 out.go:368] Setting JSON to false
	I1213 12:03:03.052120  622913 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13535,"bootTime":1765613848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:03:03.052229  622913 start.go:143] virtualization:  
	I1213 12:03:03.055383  622913 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:03:03.059239  622913 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:03:03.059332  622913 notify.go:221] Checking for updates...
	I1213 12:03:03.064728  622913 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:03:03.067859  622913 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:03.070706  622913 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:03:03.073576  622913 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:03:03.076392  622913 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:03:03.079655  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:03.080246  622913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:03:03.113231  622913 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:03:03.113356  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.174414  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.164880125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.174536  622913 docker.go:319] overlay module found
	I1213 12:03:03.177638  622913 out.go:179] * Using the docker driver based on existing profile
	I1213 12:03:03.180320  622913 start.go:309] selected driver: docker
	I1213 12:03:03.180343  622913 start.go:927] validating driver "docker" against &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.180449  622913 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:03:03.181174  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.236517  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.227319129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.236860  622913 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:03:03.236895  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:03.236967  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:03.237012  622913 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.241932  622913 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 12:03:03.244777  622913 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:03:03.247722  622913 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:03:03.250567  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:03.250698  622913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:03:03.250725  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.251056  622913 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251142  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 12:03:03.251161  622913 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.655µs
	I1213 12:03:03.251175  622913 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 12:03:03.251192  622913 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251240  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 12:03:03.251249  622913 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 59.291µs
	I1213 12:03:03.251256  622913 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251279  622913 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251318  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 12:03:03.251329  622913 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.749µs
	I1213 12:03:03.251341  622913 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251360  622913 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251395  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 12:03:03.251406  622913 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.043µs
	I1213 12:03:03.251413  622913 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251422  622913 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251455  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 12:03:03.251465  622913 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 43.717µs
	I1213 12:03:03.251472  622913 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251481  622913 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251564  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 12:03:03.251578  622913 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 97.437µs
	I1213 12:03:03.251585  622913 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 12:03:03.251596  622913 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251632  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 12:03:03.251642  622913 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.82µs
	I1213 12:03:03.251649  622913 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 12:03:03.251673  622913 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251707  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 12:03:03.251716  622913 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.632µs
	I1213 12:03:03.251723  622913 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 12:03:03.251729  622913 cache.go:87] Successfully saved all images to host disk.
	I1213 12:03:03.282338  622913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:03:03.282369  622913 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:03:03.282443  622913 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:03:03.282477  622913 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.282544  622913 start.go:364] duration metric: took 41.937µs to acquireMachinesLock for "no-preload-307409"
	I1213 12:03:03.282565  622913 start.go:96] Skipping create...Using existing machine configuration
	I1213 12:03:03.282570  622913 fix.go:54] fixHost starting: 
	I1213 12:03:03.282851  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.304419  622913 fix.go:112] recreateIfNeeded on no-preload-307409: state=Stopped err=<nil>
	W1213 12:03:03.304448  622913 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 12:03:03.307872  622913 out.go:252] * Restarting existing docker container for "no-preload-307409" ...
	I1213 12:03:03.307964  622913 cli_runner.go:164] Run: docker start no-preload-307409
	I1213 12:03:03.599368  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.618935  622913 kic.go:430] container "no-preload-307409" state is running.
	I1213 12:03:03.619319  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:03.641333  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.641563  622913 machine.go:94] provisionDockerMachine start ...
	I1213 12:03:03.641633  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:03.663338  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:03.663870  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:03.663890  622913 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:03:03.664580  622913 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:03:06.819092  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.819117  622913 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 12:03:06.819201  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:06.837856  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:06.838181  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:06.838198  622913 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 12:03:06.997122  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.997203  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.016669  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.017014  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.017037  622913 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:03:07.176125  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:03:07.176151  622913 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:03:07.176182  622913 ubuntu.go:190] setting up certificates
	I1213 12:03:07.176201  622913 provision.go:84] configureAuth start
	I1213 12:03:07.176265  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:07.193873  622913 provision.go:143] copyHostCerts
	I1213 12:03:07.193961  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:03:07.193973  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:03:07.194049  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:03:07.194164  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:03:07.194175  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:03:07.194205  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:03:07.194267  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:03:07.194275  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:03:07.194298  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:03:07.194346  622913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
	I1213 12:03:07.397856  622913 provision.go:177] copyRemoteCerts
	I1213 12:03:07.397930  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:03:07.397969  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.415003  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
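The ssh client above dials 127.0.0.1:33473, the host port Docker mapped to the container's 22/tcp. A minimal sketch of reproducing that connection by hand, reusing the inspect template, key path and user name shown in this log (the mapped port changes between runs):

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-307409)
	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa \
	    -p "$PORT" docker@127.0.0.1 hostname    # prints: no-preload-307409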
	I1213 12:03:07.523762  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 12:03:07.541934  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:03:07.560353  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 12:03:07.577524  622913 provision.go:87] duration metric: took 401.305633ms to configureAuth
	I1213 12:03:07.577567  622913 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:03:07.577753  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:07.577860  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.595178  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.595492  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.595506  622913 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:03:07.957883  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:03:07.957909  622913 machine.go:97] duration metric: took 4.316335928s to provisionDockerMachine
	I1213 12:03:07.957921  622913 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 12:03:07.957933  622913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:03:07.958002  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:03:07.958068  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.976949  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.088342  622913 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:03:08.091929  622913 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:03:08.092010  622913 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:03:08.092029  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:03:08.092100  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:03:08.092225  622913 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:03:08.092336  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:03:08.100328  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:08.119806  622913 start.go:296] duration metric: took 161.868607ms for postStartSetup
	I1213 12:03:08.119893  622913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:03:08.119935  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.137272  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.240715  622913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:03:08.245595  622913 fix.go:56] duration metric: took 4.963017027s for fixHost
	I1213 12:03:08.245624  622913 start.go:83] releasing machines lock for "no-preload-307409", held for 4.963070517s
	I1213 12:03:08.245713  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:08.262782  622913 ssh_runner.go:195] Run: cat /version.json
	I1213 12:03:08.262844  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.263126  622913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:03:08.263189  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.283140  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.296409  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.391353  622913 ssh_runner.go:195] Run: systemctl --version
	I1213 12:03:08.484408  622913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:03:08.531460  622913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:03:08.537034  622913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:03:08.537102  622913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:03:08.548165  622913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 12:03:08.548229  622913 start.go:496] detecting cgroup driver to use...
	I1213 12:03:08.548280  622913 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:03:08.548375  622913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:03:08.564936  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:03:08.579568  622913 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:03:08.579670  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:03:08.596861  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:03:08.610443  622913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:03:08.718052  622913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:03:08.841997  622913 docker.go:234] disabling docker service ...
	I1213 12:03:08.842083  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:03:08.857246  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:03:08.871656  622913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:03:09.021847  622913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:03:09.148277  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
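The stop/disable/mask sequence above leaves CRI-O as the only container runtime on the node. A quick, hedged way to confirm that state over the same SSH session (the unit names are the ones stopped or masked in this log):

	for u in docker.service docker.socket cri-docker.service cri-docker.socket containerd; do
	    printf '%-22s %s\n' "$u" "$(sudo systemctl is-active "$u")"
	done    # each of these should report inactive once crio is the runtime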
	I1213 12:03:09.162720  622913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:03:09.178582  622913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:03:09.178712  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.188481  622913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:03:09.188600  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.198182  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.207488  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.217314  622913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:03:09.225728  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.234602  622913 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.243163  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.251840  622913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:03:09.261376  622913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:03:09.269241  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.408118  622913 ssh_runner.go:195] Run: sudo systemctl restart crio
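All of the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, which the restart then picks up. A rough sketch of verifying the result from inside the node, with the expected values taken from the commands in this log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	sudo systemctl is-active crio    # active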
	I1213 12:03:09.582010  622913 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:03:09.582116  622913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:03:09.586129  622913 start.go:564] Will wait 60s for crictl version
	I1213 12:03:09.586218  622913 ssh_runner.go:195] Run: which crictl
	I1213 12:03:09.589880  622913 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:03:09.617198  622913 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:03:09.617307  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.648039  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.680132  622913 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 12:03:09.683104  622913 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:03:09.699119  622913 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 12:03:09.703132  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.712888  622913 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:03:09.713027  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:09.713074  622913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:03:09.749883  622913 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:03:09.749906  622913 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:03:09.749914  622913 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 12:03:09.750028  622913 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
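The kubelet unit override rendered above is written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes). A hedged way to confirm which flags the kubelet actually starts with after the daemon-reload:

	sudo systemctl cat kubelet                    # unit file plus the 10-kubeadm.conf drop-in
	sudo systemctl show -p ExecStart kubelet      # effective ExecStart, including --node-ip=192.168.85.2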
	I1213 12:03:09.750104  622913 ssh_runner.go:195] Run: crio config
	I1213 12:03:09.812957  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:09.812981  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:09.813006  622913 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:03:09.813030  622913 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:03:09.813160  622913 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:03:09.813240  622913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 12:03:09.821482  622913 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:03:09.821552  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:03:09.830108  622913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 12:03:09.842772  622913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 12:03:09.855539  622913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
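The kubeadm config rendered earlier is copied here as /var/tmp/minikube/kubeadm.yaml.new and, further down, diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. A sketch of that same check done by hand (file names as in this log):

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    && echo "configs match - no reconfiguration required" \
	    || echo "configs differ - control plane would be reconfigured"

In this run the diff comes back clean, which is why the log later reports "The running cluster does not require reconfiguration".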
	I1213 12:03:09.868438  622913 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:03:09.871940  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.881527  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.994807  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:10.018299  622913 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 12:03:10.018324  622913 certs.go:195] generating shared ca certs ...
	I1213 12:03:10.018341  622913 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.018485  622913 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:03:10.018546  622913 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:03:10.018560  622913 certs.go:257] generating profile certs ...
	I1213 12:03:10.018675  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 12:03:10.018739  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 12:03:10.018788  622913 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 12:03:10.018902  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:03:10.018945  622913 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:03:10.018958  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:03:10.018984  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:03:10.019011  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:03:10.019049  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:03:10.019107  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:10.019800  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:03:10.070011  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:03:10.106991  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:03:10.124508  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:03:10.141854  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 12:03:10.159596  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 12:03:10.177143  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:03:10.193680  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 12:03:10.212540  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:03:10.230850  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:03:10.247982  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:03:10.265265  622913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:03:10.280828  622913 ssh_runner.go:195] Run: openssl version
	I1213 12:03:10.287915  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.295295  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:03:10.302777  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306712  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306788  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.347657  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:03:10.355488  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.362741  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:03:10.370213  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.373963  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.374024  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.415846  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:03:10.423114  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.430238  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:03:10.437700  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441526  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441626  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.482660  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
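The ln/openssl/test sequences above follow the stock OpenSSL trust-store convention: each CA under /usr/share/ca-certificates is exposed in /etc/ssl/certs both by name and as a <subject-hash>.0 symlink, where the hash comes from openssl x509 -hash. A minimal sketch of that relationship for the minikubeCA file named in this log (b5213941 is the hash this run checked for):

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$H"                                   # b5213941 in this run
	sudo test -L "/etc/ssl/certs/${H}.0" && echo "hash symlink present"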
	I1213 12:03:10.490193  622913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:03:10.493922  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 12:03:10.537559  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 12:03:10.580339  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 12:03:10.624474  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 12:03:10.668005  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 12:03:10.719243  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 12:03:10.787031  622913 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:10.787127  622913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:03:10.787194  622913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:03:10.866441  622913 cri.go:89] found id: ""
	I1213 12:03:10.866517  622913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:03:10.878947  622913 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 12:03:10.878971  622913 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 12:03:10.879029  622913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 12:03:10.887787  622913 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 12:03:10.888361  622913 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.888611  622913 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-307409" cluster setting kubeconfig missing "no-preload-307409" context setting]
	I1213 12:03:10.889058  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.890426  622913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 12:03:10.898823  622913 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 12:03:10.898859  622913 kubeadm.go:602] duration metric: took 19.881679ms to restartPrimaryControlPlane
	I1213 12:03:10.898869  622913 kubeadm.go:403] duration metric: took 111.848044ms to StartCluster
	I1213 12:03:10.898903  622913 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.899000  622913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.900707  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.900965  622913 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:03:10.901208  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:10.901250  622913 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:03:10.901316  622913 addons.go:70] Setting storage-provisioner=true in profile "no-preload-307409"
	I1213 12:03:10.901329  622913 addons.go:239] Setting addon storage-provisioner=true in "no-preload-307409"
	I1213 12:03:10.901354  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.901796  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.902330  622913 addons.go:70] Setting dashboard=true in profile "no-preload-307409"
	I1213 12:03:10.902349  622913 addons.go:239] Setting addon dashboard=true in "no-preload-307409"
	W1213 12:03:10.902356  622913 addons.go:248] addon dashboard should already be in state true
	I1213 12:03:10.902383  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.902788  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.906749  622913 addons.go:70] Setting default-storageclass=true in profile "no-preload-307409"
	I1213 12:03:10.907002  622913 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-307409"
	I1213 12:03:10.907925  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.908085  622913 out.go:179] * Verifying Kubernetes components...
	I1213 12:03:10.911613  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:10.936135  622913 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:03:10.936200  622913 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 12:03:10.939926  622913 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 12:03:10.940040  622913 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:10.940057  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:03:10.940121  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.942800  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 12:03:10.942825  622913 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 12:03:10.942890  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.947265  622913 addons.go:239] Setting addon default-storageclass=true in "no-preload-307409"
	I1213 12:03:10.947306  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.947819  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:11.005750  622913 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.005772  622913 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:03:11.005782  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.005838  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:11.023641  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.041145  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.111003  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:11.173593  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.173636  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 12:03:11.173654  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 12:03:11.188163  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 12:03:11.188185  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 12:03:11.213443  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 12:03:11.213508  622913 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 12:03:11.227236  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.230811  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 12:03:11.230883  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 12:03:11.251133  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 12:03:11.251205  622913 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 12:03:11.292200  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 12:03:11.292226  622913 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 12:03:11.305259  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 12:03:11.305283  622913 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 12:03:11.318210  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 12:03:11.318236  622913 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 12:03:11.331855  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:11.331882  622913 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 12:03:11.346399  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.535442  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.535581  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535629  622913 retry.go:31] will retry after 290.823808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535633  622913 retry.go:31] will retry after 252.781045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535694  622913 node_ready.go:35] waiting up to 6m0s for node "no-preload-307409" to be "Ready" ...
	W1213 12:03:11.536032  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.536057  622913 retry.go:31] will retry after 294.061208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.788663  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.827131  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.830443  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.858572  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.858608  622913 retry.go:31] will retry after 534.111043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.903268  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.903302  622913 retry.go:31] will retry after 517.641227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.928403  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.928440  622913 retry.go:31] will retry after 261.246628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.190196  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:12.253861  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.253905  622913 retry.go:31] will retry after 750.097801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.392854  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:12.421390  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:12.466046  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.466119  622913 retry.go:31] will retry after 345.117349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:12.494512  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.494543  622913 retry.go:31] will retry after 582.433152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.811477  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:12.872208  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.872254  622913 retry.go:31] will retry after 1.066115266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.004542  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:13.077848  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.142906  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.142942  622913 retry.go:31] will retry after 477.26404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.177073  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.177107  622913 retry.go:31] will retry after 558.594273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.536929  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:13.621309  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:13.684925  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.684962  622913 retry.go:31] will retry after 887.0827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.735891  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.838454  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.838488  622913 retry.go:31] will retry after 1.840863262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.938866  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.997740  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.997780  622913 retry.go:31] will retry after 1.50758238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.572279  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:14.649792  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.649830  622913 retry.go:31] will retry after 2.273525411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.505555  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:15.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:15.566161  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.566200  622913 retry.go:31] will retry after 1.268984334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.680410  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:15.739773  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.739804  622913 retry.go:31] will retry after 2.516127735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.835378  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:16.919361  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.919396  622913 retry.go:31] will retry after 2.060639493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.923603  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:16.987685  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.987717  622913 retry.go:31] will retry after 3.014723999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:18.037172  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:18.256769  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:18.385179  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.385215  622913 retry.go:31] will retry after 1.545787463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.980290  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:19.083283  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.083326  622913 retry.go:31] will retry after 3.363160165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.931900  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:19.994541  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.994572  622913 retry.go:31] will retry after 3.448577935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.003109  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:20.075345  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.075383  622913 retry.go:31] will retry after 2.247696448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:20.536209  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:22.323733  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:22.390042  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.390078  622913 retry.go:31] will retry after 4.701837343s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.447431  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:22.510069  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.510101  622913 retry.go:31] will retry after 8.996063036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:22.536655  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:23.443398  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:23.501606  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.501640  622913 retry.go:31] will retry after 3.90534406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:24.537114  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:27.036285  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:27.092481  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:27.162031  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.162065  622913 retry.go:31] will retry after 11.355394108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.407221  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:27.478522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.478557  622913 retry.go:31] will retry after 8.009668822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:29.537044  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:31.506350  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:31.537137  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:31.567063  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:31.567101  622913 retry.go:31] will retry after 5.348365924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:33.537277  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:35.488997  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:35.615701  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:35.615734  622913 retry.go:31] will retry after 18.593547057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:36.036633  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:36.916463  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:36.985838  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:36.985870  622913 retry.go:31] will retry after 7.879856322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:38.518385  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:38.536542  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:38.629558  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:38.629596  622913 retry.go:31] will retry after 11.083764817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:40.537112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:43.037066  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:44.866836  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:44.926788  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:44.926822  622913 retry.go:31] will retry after 12.537177434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:45.536544  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:47.537056  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:49.714461  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:49.810126  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:49.810163  622913 retry.go:31] will retry after 17.034686012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:50.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:52.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:54.210466  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:54.276658  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.276693  622913 retry.go:31] will retry after 15.477790737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:55.037124  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:57.464704  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:57.536423  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:57.546896  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:57.546941  622913 retry.go:31] will retry after 45.136010492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:00.036301  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:02.037075  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:04.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:06.845581  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:06.913058  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.913091  622913 retry.go:31] will retry after 30.701510805s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:07.036960  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:09.536141  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.755504  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:09.840522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:09.840549  622913 retry.go:31] will retry after 18.501787354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:11.536619  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:14.037178  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:16.536405  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:18.537035  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:20.537076  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:23.036959  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:25.037157  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:27.037243  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:28.342721  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:28.405775  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:28.405881  622913 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:29.536294  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:31.536581  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:33.536622  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:36.036352  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:37.615506  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:37.688522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:37.688627  622913 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:38.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:40.037338  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:42.538150  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:42.683554  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:42.744769  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:42.744869  622913 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.747993  622913 out.go:179] * Enabled addons: 
	I1213 12:04:42.750740  622913 addons.go:530] duration metric: took 1m31.849485278s for enable addons: enabled=[]
	W1213 12:04:45.037213  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:47.536268  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:50.037029  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:52.037265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:54.537026  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:56.537082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:59.037033  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:01.536339  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:03.537034  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:06.036238  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:08.037112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:10.037152  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:12.536415  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:14.537113  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:17.037129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:19.537079  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:22.037194  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:24.536202  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:26.537127  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:29.037051  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:31.536823  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:34.036325  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:36.536291  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:39.036349  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:41.036401  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:43.037206  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:45.037527  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:47.536245  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:49.537116  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:52.036281  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:54.536956  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:56.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:58.537235  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:01.037253  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:03.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:05.537188  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:07.537266  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:10.037386  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:12.536953  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:15.036244  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:17.037163  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:19.537261  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:22.037303  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:24.537013  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:27.037104  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:29.537185  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:32.037043  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:34.037106  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:36.037218  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:38.536279  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:40.537278  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:43.037128  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:45.037308  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:47.537265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:50.037084  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:52.037310  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:54.536621  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:56.537197  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:59.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:01.537092  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:04.037144  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:06.537015  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:09.036994  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:11.537211  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:14.037223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:16.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:19.037064  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:21.536199  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:23.536309  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:25.537129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:28.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:30.537006  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:33.036221  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:35.037005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:37.037071  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:39.536473  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:41.536533  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:44.037002  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:46.037183  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:48.537001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:50.537083  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:53.037078  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:55.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:57.537019  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:00.039297  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:02.536522  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:05.037001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:07.037153  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:09.536381  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:11.536495  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:14.037082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:16.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:19.037143  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:21.537005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:24.037139  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:26.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:29.036447  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:31.536317  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:33.536995  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:35.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:38.037111  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:40.037886  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:42.536996  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:44.537110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:47.036204  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:49.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:52.036223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:54.036977  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:56.537074  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:59.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:01.536950  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:03.536998  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:06.036283  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:08.536173  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:10.536219  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:09:11.536756  622913 node_ready.go:38] duration metric: took 6m0.001029523s for node "no-preload-307409" to be "Ready" ...
	I1213 12:09:11.540138  622913 out.go:203] 
	W1213 12:09:11.543197  622913 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 12:09:11.543231  622913 out.go:285] * 
	* 
	W1213 12:09:11.545584  622913 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:09:11.548648  622913 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307409
helpers_test.go:244: (dbg) docker inspect no-preload-307409:

-- stdout --
	[
	    {
	        "Id": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	        "Created": "2025-12-13T11:52:23.357834479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T12:03:03.340968033Z",
	            "FinishedAt": "2025-12-13T12:03:01.976500099Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hosts",
	        "LogPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a-json.log",
	        "Name": "/no-preload-307409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-307409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	                "LowerDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307409",
	                "Source": "/var/lib/docker/volumes/no-preload-307409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307409",
	                "name.minikube.sigs.k8s.io": "no-preload-307409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c126f047073986da1996efceb8a3e932bcfa233495a4aa62f7ff0993488c461e",
	            "SandboxKey": "/var/run/docker/netns/c126f0470739",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-307409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b6:08:7b:b6:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "280e424abad6162e6fbaaf316b3c6095ab0d80a59a1f82eb556a84b2dd4f139a",
	                    "EndpointID": "012a611abbc58ce4e9989db1baedc5a39d41b5ffd347c4e9d8cd59dee05ce5c5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307409",
	                        "9fe6186bf0c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 2 (459.158181ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-307409 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-307409 logs -n 25: (3.129769698s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ stop    │ -p newest-cni-800979 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-800979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │                     │
	│ stop    │ -p no-preload-307409 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ addons  │ enable dashboard -p no-preload-307409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │                     │
	│ image   │ newest-cni-800979 image list --format=json                                                                                                                                                                                                           │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	│ pause   │ -p newest-cni-800979 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	│ unpause │ -p newest-cni-800979 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:03:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:03:03.050063  622913 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:03:03.050285  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050312  622913 out.go:374] Setting ErrFile to fd 2...
	I1213 12:03:03.050330  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050625  622913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:03:03.051085  622913 out.go:368] Setting JSON to false
	I1213 12:03:03.052120  622913 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13535,"bootTime":1765613848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:03:03.052229  622913 start.go:143] virtualization:  
	I1213 12:03:03.055383  622913 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:03:03.059239  622913 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:03:03.059332  622913 notify.go:221] Checking for updates...
	I1213 12:03:03.064728  622913 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:03:03.067859  622913 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:03.070706  622913 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:03:03.073576  622913 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:03:03.076392  622913 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:03:03.079655  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:03.080246  622913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:03:03.113231  622913 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:03:03.113356  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.174414  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.164880125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.174536  622913 docker.go:319] overlay module found
	I1213 12:03:03.177638  622913 out.go:179] * Using the docker driver based on existing profile
	I1213 12:03:03.180320  622913 start.go:309] selected driver: docker
	I1213 12:03:03.180343  622913 start.go:927] validating driver "docker" against &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.180449  622913 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:03:03.181174  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.236517  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.227319129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.236860  622913 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:03:03.236895  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:03.236967  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:03.237012  622913 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.241932  622913 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 12:03:03.244777  622913 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:03:03.247722  622913 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:03:03.250567  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:03.250698  622913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:03:03.250725  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.251056  622913 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251142  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 12:03:03.251161  622913 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.655µs
	I1213 12:03:03.251175  622913 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 12:03:03.251192  622913 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251240  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 12:03:03.251249  622913 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 59.291µs
	I1213 12:03:03.251256  622913 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251279  622913 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251318  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 12:03:03.251329  622913 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.749µs
	I1213 12:03:03.251341  622913 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251360  622913 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251395  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 12:03:03.251406  622913 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.043µs
	I1213 12:03:03.251413  622913 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251422  622913 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251455  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 12:03:03.251465  622913 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 43.717µs
	I1213 12:03:03.251472  622913 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251481  622913 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251564  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 12:03:03.251578  622913 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 97.437µs
	I1213 12:03:03.251585  622913 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 12:03:03.251596  622913 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251632  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 12:03:03.251642  622913 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.82µs
	I1213 12:03:03.251649  622913 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 12:03:03.251673  622913 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251707  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 12:03:03.251716  622913 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.632µs
	I1213 12:03:03.251723  622913 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 12:03:03.251729  622913 cache.go:87] Successfully saved all images to host disk.
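
The cache.go lines above check whether each image tarball already exists under .minikube/cache/images/arm64 before saving it again, which is why every image completes in microseconds. A stdlib-only Go sketch of that existence check, with the path layout inferred from the log (helper name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachedTarPath maps an image ref such as "registry.k8s.io/pause:3.10.1" to a
    // tar file under the cache root, mirroring the paths in the cache.go lines.
    func cachedTarPath(cacheRoot, arch, image string) string {
        name := image
        if i := strings.LastIndex(image, ":"); i >= 0 {
            name = image[:i] + "_" + image[i+1:]
        }
        return filepath.Join(cacheRoot, "images", arch, name)
    }

    func main() {
        home, _ := os.UserHomeDir()
        p := cachedTarPath(filepath.Join(home, ".minikube", "cache"), "arm64",
            "registry.k8s.io/pause:3.10.1")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("exists, skipping save:", p)
        } else {
            fmt.Println("missing, would save:", p)
        }
    }
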
	I1213 12:03:03.282338  622913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:03:03.282369  622913 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:03:03.282443  622913 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:03:03.282477  622913 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.282544  622913 start.go:364] duration metric: took 41.937µs to acquireMachinesLock for "no-preload-307409"
	I1213 12:03:03.282565  622913 start.go:96] Skipping create...Using existing machine configuration
	I1213 12:03:03.282570  622913 fix.go:54] fixHost starting: 
	I1213 12:03:03.282851  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.304419  622913 fix.go:112] recreateIfNeeded on no-preload-307409: state=Stopped err=<nil>
	W1213 12:03:03.304448  622913 fix.go:138] unexpected machine state, will restart: <nil>
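
At this point the existing machine is found stopped, so the start path falls through to restarting the container rather than recreating it. A hedged sketch of the inspect-then-start sequence driven through the docker CLI, as in the cli_runner lines (the profile name is taken from this log; mapping docker's "exited" status to minikube's state=Stopped is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState runs the same probe as the cli_runner lines above:
    // docker container inspect <name> --format={{.State.Status}}
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        const name = "no-preload-307409" // profile name taken from the log
        state, err := containerState(name)
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        // Docker reports a stopped container as "exited"; in that case start
        // it again, as the "Restarting existing docker container" step does.
        if state == "exited" {
            if err := exec.Command("docker", "start", name).Run(); err != nil {
                fmt.Println("start failed:", err)
            }
            return
        }
        fmt.Println("container state:", state)
    }
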
	I1213 12:02:59.273796  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.310724  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:02:59.374429  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.374460  620795 retry.go:31] will retry after 1.123869523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
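
The storage-provisioner apply fails because nothing is listening on localhost:8443 yet, and retry.go schedules another attempt after a short, jittered delay. A minimal Go sketch of that retry-after-delay pattern (the helper name, attempt count, and delays are illustrative, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryAfter calls fn up to attempts times, sleeping a jittered delay
    // between failures, roughly the behaviour behind "will retry after ...".
    func retryAfter(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryAfter(5, time.Second, func() error {
            calls++
            if calls < 3 {
                return errors.New("dial tcp [::1]:8443: connect: connection refused")
            }
            return nil
        })
        fmt.Println("final result:", err)
    }
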
	I1213 12:02:59.660188  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:02:59.746796  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.746834  620795 retry.go:31] will retry after 827.424249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.773951  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.886643  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:59.984018  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.984054  620795 retry.go:31] will retry after 1.031600228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.289311  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:00.498512  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:00.574703  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:00.609412  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.609443  620795 retry.go:31] will retry after 1.594897337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:00.654022  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.654055  620795 retry.go:31] will retry after 1.847551508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.773391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.016343  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:01.149191  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.149241  620795 retry.go:31] will retry after 1.156400239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.273296  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.773106  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:02.204552  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:02.273738  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:02.274099  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.274136  620795 retry.go:31] will retry after 1.092655081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.305854  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:02.368964  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.369001  620795 retry.go:31] will retry after 1.680740365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.502311  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:02.587589  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.587627  620795 retry.go:31] will retry after 1.930642019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.773890  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.281133  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.367295  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:03.462797  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.462834  620795 retry.go:31] will retry after 1.480584037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.773095  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.050289  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:04.211663  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.211692  620795 retry.go:31] will retry after 4.628682765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.307872  622913 out.go:252] * Restarting existing docker container for "no-preload-307409" ...
	I1213 12:03:03.307964  622913 cli_runner.go:164] Run: docker start no-preload-307409
	I1213 12:03:03.599368  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.618935  622913 kic.go:430] container "no-preload-307409" state is running.
	I1213 12:03:03.619319  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:03.641333  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.641563  622913 machine.go:94] provisionDockerMachine start ...
	I1213 12:03:03.641633  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:03.663338  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:03.663870  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:03.663890  622913 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:03:03.664580  622913 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:03:06.819092  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.819117  622913 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 12:03:06.819201  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:06.837856  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:06.838181  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:06.838198  622913 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 12:03:06.997122  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.997203  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.016669  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.017014  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.017037  622913 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:03:07.176125  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:03:07.176151  622913 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:03:07.176182  622913 ubuntu.go:190] setting up certificates
	I1213 12:03:07.176201  622913 provision.go:84] configureAuth start
	I1213 12:03:07.176265  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:07.193873  622913 provision.go:143] copyHostCerts
	I1213 12:03:07.193961  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:03:07.193973  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:03:07.194049  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:03:07.194164  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:03:07.194175  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:03:07.194205  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:03:07.194267  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:03:07.194275  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:03:07.194298  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:03:07.194346  622913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
	I1213 12:03:07.397856  622913 provision.go:177] copyRemoteCerts
	I1213 12:03:07.397930  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:03:07.397969  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.415003  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:07.523762  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 12:03:07.541934  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:03:07.560353  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 12:03:07.577524  622913 provision.go:87] duration metric: took 401.305633ms to configureAuth
	I1213 12:03:07.577567  622913 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:03:07.577753  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:07.577860  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.595178  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.595492  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.595506  622913 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:03:07.957883  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:03:07.957909  622913 machine.go:97] duration metric: took 4.316335928s to provisionDockerMachine
	I1213 12:03:07.957921  622913 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 12:03:07.957933  622913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:03:07.958002  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:03:07.958068  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.976949  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:04.273235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.518978  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:04.583937  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.583972  620795 retry.go:31] will retry after 4.359648713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.773380  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.944170  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:05.011259  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.011298  620795 retry.go:31] will retry after 2.730254551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.273717  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:05.773164  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.274023  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.773331  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.742621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:07.773999  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:07.885064  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:07.885095  620795 retry.go:31] will retry after 5.399825259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.773645  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.841141  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:08.935930  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.935967  620795 retry.go:31] will retry after 8.567303782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.944298  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:09.032112  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:09.032154  620795 retry.go:31] will retry after 7.715566724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.088342  622913 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:03:08.091929  622913 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:03:08.092010  622913 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:03:08.092029  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:03:08.092100  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:03:08.092225  622913 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:03:08.092336  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:03:08.100328  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:08.119806  622913 start.go:296] duration metric: took 161.868607ms for postStartSetup
	I1213 12:03:08.119893  622913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:03:08.119935  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.137272  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.240715  622913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:03:08.245595  622913 fix.go:56] duration metric: took 4.963017027s for fixHost
	I1213 12:03:08.245624  622913 start.go:83] releasing machines lock for "no-preload-307409", held for 4.963070517s
	I1213 12:03:08.245713  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:08.262782  622913 ssh_runner.go:195] Run: cat /version.json
	I1213 12:03:08.262844  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.263126  622913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:03:08.263189  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.283140  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.296409  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.391353  622913 ssh_runner.go:195] Run: systemctl --version
	I1213 12:03:08.484408  622913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:03:08.531460  622913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:03:08.537034  622913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:03:08.537102  622913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:03:08.548165  622913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 12:03:08.548229  622913 start.go:496] detecting cgroup driver to use...
	I1213 12:03:08.548280  622913 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:03:08.548375  622913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:03:08.564936  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:03:08.579568  622913 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:03:08.579670  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:03:08.596861  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:03:08.610443  622913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:03:08.718052  622913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:03:08.841997  622913 docker.go:234] disabling docker service ...
	I1213 12:03:08.842083  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:03:08.857246  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:03:08.871656  622913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:03:09.021847  622913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:03:09.148277  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:03:09.162720  622913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:03:09.178582  622913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:03:09.178712  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.188481  622913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:03:09.188600  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.198182  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.207488  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.217314  622913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:03:09.225728  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.234602  622913 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.243163  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.251840  622913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:03:09.261376  622913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:03:09.269241  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.408118  622913 ssh_runner.go:195] Run: sudo systemctl restart crio
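	The sed commands just above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10.1 as the pause image, cgroupfs as the cgroup manager, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl, after which systemd is reloaded and crio restarted. A small Go sketch of the same key-rewriting idiom follows, assuming a hypothetical setCrioOption helper run locally rather than over SSH as minikube does.

package main

import (
	"fmt"
	"os/exec"
)

// setCrioOption rewrites a `key = "value"` line in a CRI-O drop-in config
// using the same sed substitution shown in the log. Hypothetical helper for
// illustration; minikube runs the equivalent commands via its ssh_runner.
func setCrioOption(confPath, key, value string) error {
	expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
	out, err := exec.Command("sudo", "sed", "-i", expr, confPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("sed -i failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Println(err)
	}
	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}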
	I1213 12:03:09.582010  622913 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:03:09.582116  622913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:03:09.586129  622913 start.go:564] Will wait 60s for crictl version
	I1213 12:03:09.586218  622913 ssh_runner.go:195] Run: which crictl
	I1213 12:03:09.589880  622913 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:03:09.617198  622913 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:03:09.617307  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.648039  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.680132  622913 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 12:03:09.683104  622913 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:03:09.699119  622913 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 12:03:09.703132  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.712888  622913 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:03:09.713027  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:09.713074  622913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:03:09.749883  622913 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:03:09.749906  622913 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:03:09.749914  622913 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 12:03:09.750028  622913 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 12:03:09.750104  622913 ssh_runner.go:195] Run: crio config
	I1213 12:03:09.812957  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:09.812981  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:09.813006  622913 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:03:09.813030  622913 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:03:09.813160  622913 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:03:09.813240  622913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 12:03:09.821482  622913 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:03:09.821552  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:03:09.830108  622913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 12:03:09.842772  622913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 12:03:09.855539  622913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 12:03:09.868438  622913 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:03:09.871940  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
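	The command above is the idempotent /etc/hosts update minikube uses once the grep on the previous line finds no existing entry: strip any line for control-plane.minikube.internal, append the current mapping, write the result to a temp file, and copy it back over /etc/hosts with sudo. A hedged Go sketch that assembles the same shell pipeline (hypothetical buildHostsUpdate helper; the IP and hostname are taken from the log):

package main

import "fmt"

// buildHostsUpdate assembles the shell pipeline shown in the log: drop any
// existing line ending in "<tab><host>", append "<ip><tab><host>", stage the
// result in /tmp/h.$$ and copy it back over /etc/hosts with sudo.
// Hypothetical helper, shown only to make the idiom explicit.
func buildHostsUpdate(ip, host string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%[2]s$' /etc/hosts; echo \"%[1]s\t%[2]s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		ip, host)
}

func main() {
	fmt.Println(buildHostsUpdate("192.168.85.2", "control-plane.minikube.internal"))
}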
	I1213 12:03:09.881527  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.994807  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:10.018299  622913 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 12:03:10.018324  622913 certs.go:195] generating shared ca certs ...
	I1213 12:03:10.018341  622913 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.018485  622913 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:03:10.018546  622913 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:03:10.018560  622913 certs.go:257] generating profile certs ...
	I1213 12:03:10.018675  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 12:03:10.018739  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 12:03:10.018788  622913 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 12:03:10.018902  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:03:10.018945  622913 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:03:10.018958  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:03:10.018984  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:03:10.019011  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:03:10.019049  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:03:10.019107  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:10.019800  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:03:10.070011  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:03:10.106991  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:03:10.124508  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:03:10.141854  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 12:03:10.159596  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 12:03:10.177143  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:03:10.193680  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 12:03:10.212540  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:03:10.230850  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:03:10.247982  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:03:10.265265  622913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:03:10.280828  622913 ssh_runner.go:195] Run: openssl version
	I1213 12:03:10.287915  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.295295  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:03:10.302777  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306712  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306788  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.347657  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:03:10.355488  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.362741  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:03:10.370213  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.373963  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.374024  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.415846  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:03:10.423114  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.430238  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:03:10.437700  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441526  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441626  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.482660  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 12:03:10.490193  622913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:03:10.493922  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 12:03:10.537559  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 12:03:10.580339  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 12:03:10.624474  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 12:03:10.668005  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 12:03:10.719243  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
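	Each of the openssl commands above exits 0 only if the named certificate will still be valid 86400 seconds (24 hours) from now, presumably so the restart path can decide whether the existing control-plane certificates can be reused. A minimal Go sketch wrapping the same check (hypothetical certValidFor helper; error handling simplified):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strconv"
	"time"
)

// certValidFor reports whether the certificate at path is still valid d from
// now, using the same "openssl x509 -checkend" invocation shown in the log.
// Hypothetical helper for illustration only.
func certValidFor(path string, d time.Duration) (bool, error) {
	secs := strconv.Itoa(int(d.Seconds()))
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs).Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit: certificate expires within d
	}
	return false, err // openssl missing, unreadable file, etc.
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}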
	I1213 12:03:10.787031  622913 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:10.787127  622913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:03:10.787194  622913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:03:10.866441  622913 cri.go:89] found id: ""
	I1213 12:03:10.866517  622913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:03:10.878947  622913 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 12:03:10.878971  622913 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 12:03:10.879029  622913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 12:03:10.887787  622913 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 12:03:10.888361  622913 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.888611  622913 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-307409" cluster setting kubeconfig missing "no-preload-307409" context setting]
	I1213 12:03:10.889058  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.890426  622913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 12:03:10.898823  622913 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 12:03:10.898859  622913 kubeadm.go:602] duration metric: took 19.881679ms to restartPrimaryControlPlane
	I1213 12:03:10.898869  622913 kubeadm.go:403] duration metric: took 111.848044ms to StartCluster
	I1213 12:03:10.898903  622913 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.899000  622913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.900707  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.900965  622913 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:03:10.901208  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:10.901250  622913 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:03:10.901316  622913 addons.go:70] Setting storage-provisioner=true in profile "no-preload-307409"
	I1213 12:03:10.901329  622913 addons.go:239] Setting addon storage-provisioner=true in "no-preload-307409"
	I1213 12:03:10.901354  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.901796  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.902330  622913 addons.go:70] Setting dashboard=true in profile "no-preload-307409"
	I1213 12:03:10.902349  622913 addons.go:239] Setting addon dashboard=true in "no-preload-307409"
	W1213 12:03:10.902356  622913 addons.go:248] addon dashboard should already be in state true
	I1213 12:03:10.902383  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.902788  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.906749  622913 addons.go:70] Setting default-storageclass=true in profile "no-preload-307409"
	I1213 12:03:10.907002  622913 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-307409"
	I1213 12:03:10.907925  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.908085  622913 out.go:179] * Verifying Kubernetes components...
	I1213 12:03:10.911613  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:10.936135  622913 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:03:10.936200  622913 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 12:03:10.939926  622913 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 12:03:10.940040  622913 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:10.940057  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:03:10.940121  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.942800  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 12:03:10.942825  622913 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 12:03:10.942890  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.947265  622913 addons.go:239] Setting addon default-storageclass=true in "no-preload-307409"
	I1213 12:03:10.947306  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.947819  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:11.005750  622913 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.005772  622913 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:03:11.005782  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.005838  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:11.023641  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.041145  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.111003  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:11.173593  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.173636  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 12:03:11.173654  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 12:03:11.188163  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 12:03:11.188185  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 12:03:11.213443  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 12:03:11.213508  622913 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 12:03:11.227236  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.230811  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 12:03:11.230883  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 12:03:11.251133  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 12:03:11.251205  622913 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 12:03:11.292200  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 12:03:11.292226  622913 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 12:03:11.305259  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 12:03:11.305283  622913 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 12:03:11.318210  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 12:03:11.318236  622913 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 12:03:11.331855  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:11.331882  622913 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 12:03:11.346399  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.535442  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.535581  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535629  622913 retry.go:31] will retry after 290.823808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535633  622913 retry.go:31] will retry after 252.781045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535694  622913 node_ready.go:35] waiting up to 6m0s for node "no-preload-307409" to be "Ready" ...
	W1213 12:03:11.536032  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.536057  622913 retry.go:31] will retry after 294.061208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.788663  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.827131  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.830443  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.858572  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.858608  622913 retry.go:31] will retry after 534.111043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.903268  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.903302  622913 retry.go:31] will retry after 517.641227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.928403  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.928440  622913 retry.go:31] will retry after 261.246628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.190196  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:12.253861  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.253905  622913 retry.go:31] will retry after 750.097801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.392854  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:12.421390  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:12.466046  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.466119  622913 retry.go:31] will retry after 345.117349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:12.494512  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.494543  622913 retry.go:31] will retry after 582.433152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.811477  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:12.872208  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.872254  622913 retry.go:31] will retry after 1.066115266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.004542  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:09.273871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:09.773704  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.273974  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.773144  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.273093  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.773168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.273119  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.773938  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.274064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.285062  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.346306  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.346338  620795 retry.go:31] will retry after 9.878335415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.773923  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.077848  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.142906  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.142942  622913 retry.go:31] will retry after 477.26404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.177073  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.177107  622913 retry.go:31] will retry after 558.594273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.536929  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:13.621309  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:13.684925  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.684962  622913 retry.go:31] will retry after 887.0827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.735891  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.838454  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.838488  622913 retry.go:31] will retry after 1.840863262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.938866  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.997740  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.997780  622913 retry.go:31] will retry after 1.50758238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.572279  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:14.649792  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.649830  622913 retry.go:31] will retry after 2.273525411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.505555  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:15.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:15.566161  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.566200  622913 retry.go:31] will retry after 1.268984334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.680410  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:15.739773  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.739804  622913 retry.go:31] will retry after 2.516127735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.835378  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:16.919361  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.919396  622913 retry.go:31] will retry after 2.060639493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.923603  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:16.987685  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.987717  622913 retry.go:31] will retry after 3.014723999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:18.037172  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
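Alongside the apply retries, the node_ready.go warnings and the repeated `pgrep -xnf kube-apiserver.*minikube.*` runs are the two readiness probes: one polls the node's Ready condition over HTTPS, the other checks that an apiserver process exists at all. The following is a minimal Go sketch of the HTTP side of that polling, assuming a hypothetical /readyz URL and a fixed 500ms interval; it is not minikube's implementation, just the pattern these lines reflect.

    // Sketch only: poll an apiserver endpoint until it accepts connections.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForAPIServer(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver serves a self-signed cert during bring-up, so this
            // probe skips verification (probe only, not a real client).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                return nil // any HTTP response means the port is accepting connections
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not reachable within %s", url, timeout)
    }

    func main() {
        if err := waitForAPIServer("https://192.168.85.2:8443/readyz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }

Once a probe like this succeeds, the queued addon applies stop failing validation, which is why the retries and the readiness warnings track each other so closely in the log.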
	I1213 12:03:14.273845  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:14.773934  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.774017  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.273243  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.748013  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:16.773600  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:16.899498  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.899555  620795 retry.go:31] will retry after 7.173965376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.273146  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:17.504219  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:17.614341  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.614369  620795 retry.go:31] will retry after 8.805046452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.773767  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.273931  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.773442  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.256769  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:18.385179  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.385215  622913 retry.go:31] will retry after 1.545787463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.980290  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:19.083283  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.083326  622913 retry.go:31] will retry after 3.363160165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.931900  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:19.994541  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.994572  622913 retry.go:31] will retry after 3.448577935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.003109  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:20.075345  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.075383  622913 retry.go:31] will retry after 2.247696448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:20.536209  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:22.323733  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:22.390042  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.390078  622913 retry.go:31] will retry after 4.701837343s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.447431  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:22.510069  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.510101  622913 retry.go:31] will retry after 8.996063036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
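The block above is the client-side half of the failure: kubectl apply validates each addon manifest against the cluster's OpenAPI schema, and with the apiserver on localhost:8443 refusing connections that download fails, so every apply exits with status 1 and minikube schedules another attempt after a longer delay. A minimal Go sketch of that retry-with-backoff pattern (illustrative only, not minikube's retry.go; the manifest path is taken from the log, and kubectl is assumed to be on PATH):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs a kubectl apply until it succeeds, sleeping a
    // growing, jittered interval between attempts, similar in spirit to the
    // "will retry after Ns" lines in the log above.
    func applyWithRetry(manifest string, attempts int) error {
        backoff := 2 * time.Second
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
            // Jitter the delay so parallel appliers do not retry in lockstep.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("apply failed, will retry after %s\n", sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }

As the error text itself notes, passing --validate=false would skip the schema download, but the applies would still fail until the apiserver actually accepts connections.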
	W1213 12:03:22.536655  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:19.273647  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:19.773235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.273783  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.774109  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.273100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.774041  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.273187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.773919  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:23.224947  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:23.273354  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:23.287102  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.287132  620795 retry.go:31] will retry after 17.975754277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.774029  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.073794  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:24.135298  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.135337  620795 retry.go:31] will retry after 17.719019377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.443398  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:23.501606  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.501640  622913 retry.go:31] will retry after 3.90534406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:24.537114  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:27.036285  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
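The node_ready.go warnings above come from a separate loop that polls the node object until its Ready condition turns True; it hits the same refused connection, only against the node's address 192.168.85.2:8443 instead of localhost. A hedged client-go sketch of such a readiness poll (an assumed helper, not minikube's node_ready.go; the kubeconfig path and node name are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the apiserver until the named node reports the
    // Ready condition as True. Connection-refused errors (apiserver still
    // starting) simply fall through to the next attempt.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, cs, "no-preload-307409"); err != nil {
            fmt.Println("node never became Ready:", err)
            return
        }
        fmt.Println("node is Ready")
    }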
	I1213 12:03:27.092481  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:27.162031  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.162065  622913 retry.go:31] will retry after 11.355394108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.407221  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:27.478522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.478557  622913 retry.go:31] will retry after 8.009668822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.273481  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.773666  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.773170  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.273652  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.420263  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:26.478183  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.478224  620795 retry.go:31] will retry after 20.903659468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.773685  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.273297  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.773524  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:29.537044  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:31.506350  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:31.537137  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:31.567063  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:31.567101  622913 retry.go:31] will retry after 5.348365924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:29.273854  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:29.773973  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.273040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.773142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.273258  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.773723  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.274053  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.774024  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.273125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.773200  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:33.537277  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:35.488997  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:35.615701  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:35.615734  622913 retry.go:31] will retry after 18.593547057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:36.036633  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:36.916463  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:36.985838  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:36.985870  622913 retry.go:31] will retry after 7.879856322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:34.273224  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:34.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.273423  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.773837  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.273251  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.773088  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.773099  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.773678  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.518385  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:38.536542  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:38.629558  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:38.629596  622913 retry.go:31] will retry after 11.083764817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:40.537112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:43.037066  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:39.273565  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:39.773916  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.274028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.773120  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.263107  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:41.273658  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:41.328103  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.328152  620795 retry.go:31] will retry after 24.557962123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.773949  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.855229  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:41.913722  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.913758  620795 retry.go:31] will retry after 29.657634591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:42.273168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:42.773137  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.273064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.773040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.866836  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:44.926788  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:44.926822  622913 retry.go:31] will retry after 12.537177434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:45.536544  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:47.537056  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:44.273531  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.773694  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.273864  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.773153  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.273336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.773222  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.273977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.382145  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:47.444684  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.444761  620795 retry.go:31] will retry after 14.939941469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.773125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.773715  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.714461  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:49.810126  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:49.810163  622913 retry.go:31] will retry after 17.034686012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:50.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:52.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:49.274132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.773105  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.273278  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.773375  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.273108  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.773957  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.273086  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.773220  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.273134  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.773528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.210466  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:54.276658  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.276693  622913 retry.go:31] will retry after 15.477790737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:55.037124  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:57.464704  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:57.536423  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:57.546896  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:57.546941  622913 retry.go:31] will retry after 45.136010492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.273748  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.773661  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.273945  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.773185  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.273156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.773921  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:57.273352  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:03:57.273425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:03:57.360759  620795 cri.go:89] found id: ""
	I1213 12:03:57.360784  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.360793  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:03:57.360799  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:03:57.360899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:03:57.386673  620795 cri.go:89] found id: ""
	I1213 12:03:57.386699  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.386709  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:03:57.386715  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:03:57.386772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:03:57.412179  620795 cri.go:89] found id: ""
	I1213 12:03:57.412202  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.412211  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:03:57.412217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:03:57.412275  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:03:57.440758  620795 cri.go:89] found id: ""
	I1213 12:03:57.440782  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.440791  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:03:57.440797  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:03:57.440863  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:03:57.474164  620795 cri.go:89] found id: ""
	I1213 12:03:57.474189  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.474198  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:03:57.474205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:03:57.474266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:03:57.513790  620795 cri.go:89] found id: ""
	I1213 12:03:57.513811  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.513820  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:03:57.513826  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:03:57.513882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:03:57.549685  620795 cri.go:89] found id: ""
	I1213 12:03:57.549708  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.549716  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:03:57.549723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:03:57.549784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:03:57.575809  620795 cri.go:89] found id: ""
	I1213 12:03:57.575830  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.575839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:03:57.575848  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:03:57.575860  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:03:57.645191  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:03:57.645229  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:03:57.662016  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:03:57.662048  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:03:57.724395  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:03:57.724433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:03:57.724446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:03:57.752976  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:03:57.753012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:00.036301  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:02.037075  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:00.282268  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:00.369064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:00.369151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:00.446224  620795 cri.go:89] found id: ""
	I1213 12:04:00.446257  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.446267  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:00.446274  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:00.446398  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:00.492701  620795 cri.go:89] found id: ""
	I1213 12:04:00.492728  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.492737  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:00.492744  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:00.492814  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:00.537493  620795 cri.go:89] found id: ""
	I1213 12:04:00.537573  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.537600  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:00.537617  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:00.537703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:00.567417  620795 cri.go:89] found id: ""
	I1213 12:04:00.567457  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.567467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:00.567493  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:00.567660  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:00.597259  620795 cri.go:89] found id: ""
	I1213 12:04:00.597333  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.597358  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:00.597371  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:00.597453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:00.624935  620795 cri.go:89] found id: ""
	I1213 12:04:00.625008  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.625032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:00.625053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:00.625125  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:00.656802  620795 cri.go:89] found id: ""
	I1213 12:04:00.656830  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.656846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:00.656853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:00.656924  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:00.684243  620795 cri.go:89] found id: ""
	I1213 12:04:00.684318  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.684342  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:00.684364  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:00.684406  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:00.755205  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:00.755244  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:00.772314  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:00.772345  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:00.841157  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:00.841236  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:00.841257  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:00.870321  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:00.870357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:02.384998  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:02.445321  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:02.445354  620795 retry.go:31] will retry after 47.283712675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:03.403559  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:03.414405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:03.414472  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:03.440207  620795 cri.go:89] found id: ""
	I1213 12:04:03.440275  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.440299  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:03.440320  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:03.440406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:03.473860  620795 cri.go:89] found id: ""
	I1213 12:04:03.473906  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.473916  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:03.473923  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:03.474005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:03.500069  620795 cri.go:89] found id: ""
	I1213 12:04:03.500102  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.500111  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:03.500118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:03.500194  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:03.550253  620795 cri.go:89] found id: ""
	I1213 12:04:03.550329  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.550353  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:03.550372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:03.550459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:03.595628  620795 cri.go:89] found id: ""
	I1213 12:04:03.595713  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.595737  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:03.595757  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:03.595871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:03.626718  620795 cri.go:89] found id: ""
	I1213 12:04:03.626796  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.626827  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:03.626849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:03.626954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:03.657254  620795 cri.go:89] found id: ""
	I1213 12:04:03.657281  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.657290  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:03.657297  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:03.657356  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:03.682193  620795 cri.go:89] found id: ""
	I1213 12:04:03.682268  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.682292  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:03.682315  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:03.682355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:03.750002  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:03.750025  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:03.750039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:03.779008  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:03.779046  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:03.807344  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:03.807424  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:03.879158  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:03.879201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:04:04.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:06.845581  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:06.913058  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.913091  622913 retry.go:31] will retry after 30.701510805s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:07.036960  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:05.886355  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:05.944754  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:05.944842  620795 retry.go:31] will retry after 33.803790372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.397350  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:06.407918  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:06.407990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:06.436013  620795 cri.go:89] found id: ""
	I1213 12:04:06.436040  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.436049  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:06.436056  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:06.436121  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:06.462051  620795 cri.go:89] found id: ""
	I1213 12:04:06.462074  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.462083  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:06.462089  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:06.462147  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:06.487916  620795 cri.go:89] found id: ""
	I1213 12:04:06.487943  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.487952  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:06.487959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:06.488027  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:06.514150  620795 cri.go:89] found id: ""
	I1213 12:04:06.514181  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.514190  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:06.514196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:06.514255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:06.567862  620795 cri.go:89] found id: ""
	I1213 12:04:06.567900  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.567910  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:06.567917  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:06.567977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:06.615399  620795 cri.go:89] found id: ""
	I1213 12:04:06.615428  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.615446  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:06.615453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:06.615546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:06.645078  620795 cri.go:89] found id: ""
	I1213 12:04:06.645150  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.645174  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:06.645196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:06.645278  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:06.673976  620795 cri.go:89] found id: ""
	I1213 12:04:06.674002  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.674011  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:06.674022  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:06.674067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:06.703467  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:06.703504  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:06.731693  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:06.731721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:06.801110  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:06.801154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:06.817774  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:06.817804  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:06.899087  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
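The cycle above, and the near-identical cycles that follow, is minikube's diagnostic pass for a control plane that never came up: for each expected component it asks the CRI runtime for matching containers with "sudo crictl ps -a --quiet --name=<component>", finds none, gathers kubelet/CRI-O/dmesg logs, and then tries "kubectl describe nodes", which fails because nothing is listening on localhost:8443. The following is a minimal, hypothetical Go sketch of that probe pattern; it is not minikube's actual cri.go and only assumes that sudo and crictl are available on the node.

    // probe_containers.go - hypothetical sketch of the per-component CRI probe
    // seen in the log above. Not minikube's code; assumes `sudo crictl` exists.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same command ssh_runner executes on the node and
    // returns any container IDs (running or exited) whose name matches.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // crictl prints one ID per line; empty output means no match.
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
            } else {
                fmt.Printf("%q: found %d container(s): %v\n", c, len(ids), ids)
            }
        }
    }

In the failed run above every probe returns an empty list, which is why each "found id" line is blank and every component is reported as missing.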
	W1213 12:04:09.536141  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.755504  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:09.840522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:09.840549  622913 retry.go:31] will retry after 18.501787354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:11.536619  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.400132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:09.410430  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:09.410500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:09.440067  620795 cri.go:89] found id: ""
	I1213 12:04:09.440090  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.440100  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:09.440107  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:09.440167  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:09.470041  620795 cri.go:89] found id: ""
	I1213 12:04:09.470062  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.470071  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:09.470078  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:09.470135  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:09.496421  620795 cri.go:89] found id: ""
	I1213 12:04:09.496444  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.496453  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:09.496459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:09.496516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:09.535210  620795 cri.go:89] found id: ""
	I1213 12:04:09.535233  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.535241  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:09.535248  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:09.535322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:09.593867  620795 cri.go:89] found id: ""
	I1213 12:04:09.593894  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.593905  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:09.593912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:09.593967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:09.633869  620795 cri.go:89] found id: ""
	I1213 12:04:09.633895  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.633904  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:09.633911  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:09.633967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:09.660082  620795 cri.go:89] found id: ""
	I1213 12:04:09.660104  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.660113  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:09.660119  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:09.660180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:09.686975  620795 cri.go:89] found id: ""
	I1213 12:04:09.687005  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.687013  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:09.687023  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:09.687035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:09.756960  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:09.756994  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:09.779895  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:09.779929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:09.858208  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:09.858229  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:09.858243  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:09.886438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:09.886472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:11.571741  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:11.635299  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:11.635338  620795 retry.go:31] will retry after 28.848947099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:12.418247  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:12.428921  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:12.428996  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:12.453422  620795 cri.go:89] found id: ""
	I1213 12:04:12.453447  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.453455  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:12.453462  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:12.453523  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:12.482791  620795 cri.go:89] found id: ""
	I1213 12:04:12.482818  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.482827  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:12.482834  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:12.482892  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:12.509185  620795 cri.go:89] found id: ""
	I1213 12:04:12.509207  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.509216  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:12.509222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:12.509281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:12.555782  620795 cri.go:89] found id: ""
	I1213 12:04:12.555810  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.555820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:12.555868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:12.555953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:12.609661  620795 cri.go:89] found id: ""
	I1213 12:04:12.609682  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.609691  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:12.609697  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:12.609753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:12.636223  620795 cri.go:89] found id: ""
	I1213 12:04:12.636251  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.636268  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:12.636275  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:12.636335  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:12.663456  620795 cri.go:89] found id: ""
	I1213 12:04:12.663484  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.663493  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:12.663499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:12.663583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:12.688687  620795 cri.go:89] found id: ""
	I1213 12:04:12.688714  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.688723  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:12.688733  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:12.688745  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:12.705209  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:12.705240  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:12.766977  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:12.767041  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:12.767064  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:12.795358  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:12.795396  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:12.823112  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:12.823143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:04:14.037178  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:16.536405  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:15.388432  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:15.398781  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:15.398905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:15.425880  620795 cri.go:89] found id: ""
	I1213 12:04:15.425920  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.425929  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:15.425935  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:15.426005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:15.451424  620795 cri.go:89] found id: ""
	I1213 12:04:15.451467  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.451477  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:15.451486  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:15.451583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:15.476481  620795 cri.go:89] found id: ""
	I1213 12:04:15.476525  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.476534  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:15.476541  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:15.476612  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:15.502062  620795 cri.go:89] found id: ""
	I1213 12:04:15.502088  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.502097  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:15.502104  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:15.502173  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:15.588057  620795 cri.go:89] found id: ""
	I1213 12:04:15.588132  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.588155  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:15.588175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:15.588279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:15.616479  620795 cri.go:89] found id: ""
	I1213 12:04:15.616506  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.616519  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:15.616526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:15.616602  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:15.649712  620795 cri.go:89] found id: ""
	I1213 12:04:15.649789  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.649813  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:15.649827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:15.649912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:15.675926  620795 cri.go:89] found id: ""
	I1213 12:04:15.675995  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.676019  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:15.676034  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:15.676049  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:15.692725  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:15.692755  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:15.759900  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:15.759963  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:15.759989  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:15.789315  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:15.789425  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:15.818647  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:15.818675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.385812  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:18.396389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:18.396461  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:18.422777  620795 cri.go:89] found id: ""
	I1213 12:04:18.422800  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.422808  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:18.422814  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:18.422873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:18.448579  620795 cri.go:89] found id: ""
	I1213 12:04:18.448607  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.448616  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:18.448622  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:18.448677  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:18.474629  620795 cri.go:89] found id: ""
	I1213 12:04:18.474707  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.474744  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:18.474768  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:18.474859  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:18.499793  620795 cri.go:89] found id: ""
	I1213 12:04:18.499819  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.499828  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:18.499837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:18.499894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:18.531333  620795 cri.go:89] found id: ""
	I1213 12:04:18.531368  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.531377  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:18.531383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:18.531450  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:18.583893  620795 cri.go:89] found id: ""
	I1213 12:04:18.583923  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.583932  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:18.583939  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:18.584008  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:18.620082  620795 cri.go:89] found id: ""
	I1213 12:04:18.620120  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.620129  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:18.620135  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:18.620210  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:18.647112  620795 cri.go:89] found id: ""
	I1213 12:04:18.647137  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.647145  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:18.647155  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:18.647167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.712791  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:18.712833  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:18.728892  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:18.728920  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:18.793078  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:18.793150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:18.793172  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:18.821911  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:18.821947  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:18.537035  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:20.537076  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:23.036959  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
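Interleaved with the diagnostic cycles, a second process (622913) keeps polling the node object for no-preload-307409 and logs one warning per refused connection. A simplified, hypothetical stand-in for that loop follows; the real node_ready.go uses client-go with proper cluster credentials, whereas this sketch just retries a plain HTTPS GET against the endpoint from the log and skips certificate verification purely for illustration.

    // node_ready_poll.go - simplified, hypothetical stand-in for the
    // node_ready.go:55 polling seen above. Not minikube's code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint copied from the log; the real caller authenticates with the
        // cluster CA and client certificates instead of skipping verification.
        url := "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409"
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 10; i++ {
            resp, err := client.Get(url)
            if err != nil {
                // Mirrors the "error getting node ... (will retry)" warnings above.
                fmt.Printf("error getting node (will retry): %v\n", err)
                time.Sleep(2 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Printf("apiserver answered with HTTP %d\n", resp.StatusCode)
            return
        }
        fmt.Println("gave up waiting for the apiserver")
    }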
	I1213 12:04:21.353995  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:21.364153  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:21.364265  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:21.389593  620795 cri.go:89] found id: ""
	I1213 12:04:21.389673  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.389690  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:21.389698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:21.389773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:21.418684  620795 cri.go:89] found id: ""
	I1213 12:04:21.418706  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.418715  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:21.418722  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:21.418778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:21.442724  620795 cri.go:89] found id: ""
	I1213 12:04:21.442799  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.442822  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:21.442841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:21.442927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:21.472117  620795 cri.go:89] found id: ""
	I1213 12:04:21.472141  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.472150  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:21.472156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:21.472213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:21.501589  620795 cri.go:89] found id: ""
	I1213 12:04:21.501612  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.501621  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:21.501627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:21.501688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:21.563954  620795 cri.go:89] found id: ""
	I1213 12:04:21.564023  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.564046  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:21.564069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:21.564151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:21.612229  620795 cri.go:89] found id: ""
	I1213 12:04:21.612263  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.612273  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:21.612280  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:21.612339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:21.639602  620795 cri.go:89] found id: ""
	I1213 12:04:21.639636  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.639645  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:21.639655  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:21.639669  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:21.705516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:21.705552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:21.722491  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:21.722521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:21.783641  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:21.783663  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:21.783676  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:21.811307  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:21.811340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:25.037157  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:27.037243  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:24.340508  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:24.351403  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:24.351482  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:24.382302  620795 cri.go:89] found id: ""
	I1213 12:04:24.382379  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.382404  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:24.382425  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:24.382538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:24.408839  620795 cri.go:89] found id: ""
	I1213 12:04:24.408862  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.408871  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:24.408878  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:24.408936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:24.435623  620795 cri.go:89] found id: ""
	I1213 12:04:24.435651  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.435661  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:24.435667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:24.435727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:24.461121  620795 cri.go:89] found id: ""
	I1213 12:04:24.461149  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.461158  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:24.461165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:24.461251  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:24.486111  620795 cri.go:89] found id: ""
	I1213 12:04:24.486144  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.486153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:24.486176  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:24.486257  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:24.511493  620795 cri.go:89] found id: ""
	I1213 12:04:24.511567  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.511578  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:24.511585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:24.511646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:24.546004  620795 cri.go:89] found id: ""
	I1213 12:04:24.546029  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.546052  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:24.546059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:24.546129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:24.573601  620795 cri.go:89] found id: ""
	I1213 12:04:24.573677  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.573699  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:24.573720  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:24.573758  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:24.651738  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:24.651779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:24.669002  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:24.669035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:24.734744  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:24.734767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:24.734780  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:24.763652  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:24.763687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.296287  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:27.306558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:27.306632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:27.331288  620795 cri.go:89] found id: ""
	I1213 12:04:27.331315  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.331324  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:27.331331  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:27.331388  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:27.357587  620795 cri.go:89] found id: ""
	I1213 12:04:27.357611  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.357620  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:27.357626  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:27.357681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:27.383604  620795 cri.go:89] found id: ""
	I1213 12:04:27.383628  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.383637  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:27.383644  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:27.383699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:27.408104  620795 cri.go:89] found id: ""
	I1213 12:04:27.408183  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.408199  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:27.408207  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:27.408273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:27.434284  620795 cri.go:89] found id: ""
	I1213 12:04:27.434309  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.434318  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:27.434325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:27.434389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:27.459356  620795 cri.go:89] found id: ""
	I1213 12:04:27.459382  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.459391  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:27.459399  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:27.459457  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:27.484476  620795 cri.go:89] found id: ""
	I1213 12:04:27.484543  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.484558  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:27.484565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:27.484630  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:27.510910  620795 cri.go:89] found id: ""
	I1213 12:04:27.510937  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.510946  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:27.510955  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:27.510967  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:27.543054  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:27.543085  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:27.641750  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:27.641818  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:27.641838  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:27.671375  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:27.671412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.701704  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:27.701735  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:28.342721  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:28.405775  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:28.405881  622913 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:29.536294  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:31.536581  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:30.268871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:30.279472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:30.279561  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:30.305479  620795 cri.go:89] found id: ""
	I1213 12:04:30.305504  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.305513  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:30.305520  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:30.305577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:30.330879  620795 cri.go:89] found id: ""
	I1213 12:04:30.330904  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.330914  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:30.330920  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:30.330978  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:30.358794  620795 cri.go:89] found id: ""
	I1213 12:04:30.358821  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.358830  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:30.358837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:30.358899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:30.384574  620795 cri.go:89] found id: ""
	I1213 12:04:30.384648  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.384662  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:30.384669  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:30.384728  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:30.409348  620795 cri.go:89] found id: ""
	I1213 12:04:30.409374  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.409383  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:30.409390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:30.409460  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:30.435261  620795 cri.go:89] found id: ""
	I1213 12:04:30.435286  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.435295  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:30.435302  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:30.435357  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:30.459810  620795 cri.go:89] found id: ""
	I1213 12:04:30.459834  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.459843  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:30.459849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:30.459906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:30.485697  620795 cri.go:89] found id: ""
	I1213 12:04:30.485720  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.485728  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:30.485738  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:30.485749  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:30.513499  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:30.513534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:30.574739  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:30.574767  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:30.658042  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:30.658078  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:30.678263  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:30.678291  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:30.741695  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.242096  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:33.253053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:33.253146  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:33.279722  620795 cri.go:89] found id: ""
	I1213 12:04:33.279748  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.279756  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:33.279764  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:33.279820  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:33.306092  620795 cri.go:89] found id: ""
	I1213 12:04:33.306129  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.306139  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:33.306163  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:33.306252  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:33.332772  620795 cri.go:89] found id: ""
	I1213 12:04:33.332796  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.332813  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:33.332819  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:33.332882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:33.367716  620795 cri.go:89] found id: ""
	I1213 12:04:33.367744  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.367754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:33.367760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:33.367822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:33.400175  620795 cri.go:89] found id: ""
	I1213 12:04:33.400242  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.400258  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:33.400266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:33.400325  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:33.424852  620795 cri.go:89] found id: ""
	I1213 12:04:33.424877  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.424887  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:33.424894  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:33.424984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:33.453556  620795 cri.go:89] found id: ""
	I1213 12:04:33.453581  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.453590  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:33.453597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:33.453653  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:33.479131  620795 cri.go:89] found id: ""
	I1213 12:04:33.479156  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.479165  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:33.479175  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:33.479187  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:33.549906  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:33.550637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:33.572706  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:33.572863  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:33.662497  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.662522  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:33.662535  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:33.692067  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:33.692111  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:33.536622  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:36.036352  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:37.615506  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:37.688522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:37.688627  622913 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:38.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:36.220187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:36.230829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:36.230906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:36.260247  620795 cri.go:89] found id: ""
	I1213 12:04:36.260271  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.260280  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:36.260286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:36.260342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:36.285940  620795 cri.go:89] found id: ""
	I1213 12:04:36.285973  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.285982  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:36.285988  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:36.286059  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:36.311531  620795 cri.go:89] found id: ""
	I1213 12:04:36.311553  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.311561  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:36.311568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:36.311633  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:36.336755  620795 cri.go:89] found id: ""
	I1213 12:04:36.336849  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.336865  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:36.336873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:36.336933  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:36.361652  620795 cri.go:89] found id: ""
	I1213 12:04:36.361676  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.361684  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:36.361690  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:36.361748  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:36.392507  620795 cri.go:89] found id: ""
	I1213 12:04:36.392530  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.392539  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:36.392545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:36.392601  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:36.418503  620795 cri.go:89] found id: ""
	I1213 12:04:36.418526  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.418535  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:36.418540  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:36.418614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:36.444832  620795 cri.go:89] found id: ""
	I1213 12:04:36.444856  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.444865  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:36.444874  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:36.444891  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:36.515523  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:36.515566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:36.535671  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:36.535699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:36.655383  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:36.655406  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:36.655421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:36.684176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:36.684212  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.215366  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:39.225843  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:39.225914  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:04:40.037338  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:42.538150  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:42.683554  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:42.744769  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:42.744869  622913 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.747993  622913 out.go:179] * Enabled addons: 
	I1213 12:04:42.750740  622913 addons.go:530] duration metric: took 1m31.849485278s for enable addons: enabled=[]
	I1213 12:04:39.251825  620795 cri.go:89] found id: ""
	I1213 12:04:39.251850  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.251860  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:39.251867  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:39.251927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:39.280966  620795 cri.go:89] found id: ""
	I1213 12:04:39.280991  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.281000  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:39.281007  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:39.281063  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:39.305488  620795 cri.go:89] found id: ""
	I1213 12:04:39.305511  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.305520  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:39.305526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:39.305583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:39.330461  620795 cri.go:89] found id: ""
	I1213 12:04:39.330484  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.330493  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:39.330500  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:39.330556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:39.355410  620795 cri.go:89] found id: ""
	I1213 12:04:39.355483  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.355507  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:39.355565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:39.355706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:39.384890  620795 cri.go:89] found id: ""
	I1213 12:04:39.384916  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.384926  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:39.384933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:39.385017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:39.409735  620795 cri.go:89] found id: ""
	I1213 12:04:39.409758  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.409767  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:39.409773  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:39.409833  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:39.439648  620795 cri.go:89] found id: ""
	I1213 12:04:39.439673  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.439685  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:39.439695  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:39.439706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:39.505768  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:39.505803  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:39.525572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:39.525602  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:39.624619  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:39.624643  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:39.624656  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:39.653269  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:39.653306  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.749621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:39.805957  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:39.806064  620795 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:40.484759  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:40.549677  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:40.549776  620795 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
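Every manifest in that batch fails for the same underlying reason: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8443, i.e. the apiserver is not running. The --validate=false flag suggested in the message only skips schema validation; the apply would still fail against a dead apiserver. A minimal sketch of how one might confirm this from inside the node, assuming `minikube ssh` access to the affected profile (profile flag omitted here) and that the ss and curl binaries are present in the node image:

    # check whether anything is serving on the apiserver port
    minikube ssh -- sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # probe the health endpoint directly (self-signed cert, hence -k)
    minikube ssh -- curl -sk https://localhost:8443/healthz
    # list apiserver containers the same way the log below does
    minikube ssh -- sudo crictl ps -a --quiet --name=kube-apiserver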
	I1213 12:04:42.182348  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:42.195718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:42.195860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:42.224999  620795 cri.go:89] found id: ""
	I1213 12:04:42.225044  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.225058  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:42.225067  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:42.225192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:42.254835  620795 cri.go:89] found id: ""
	I1213 12:04:42.254913  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.254949  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:42.254975  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:42.255077  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:42.283814  620795 cri.go:89] found id: ""
	I1213 12:04:42.283889  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.283916  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:42.283931  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:42.284014  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:42.315795  620795 cri.go:89] found id: ""
	I1213 12:04:42.315823  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.315859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:42.315871  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:42.315954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:42.342987  620795 cri.go:89] found id: ""
	I1213 12:04:42.343026  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.343035  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:42.343042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:42.343114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:42.368935  620795 cri.go:89] found id: ""
	I1213 12:04:42.368969  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.368978  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:42.368986  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:42.369052  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:42.398633  620795 cri.go:89] found id: ""
	I1213 12:04:42.398703  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.398727  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:42.398747  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:42.398834  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:42.424223  620795 cri.go:89] found id: ""
	I1213 12:04:42.424299  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.424324  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
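The block above is minikube's liveness check for the control plane: it asks CRI-O for a container matching each expected component name and gets an empty ID list back every time, which is why it then falls back to gathering raw logs. The same check can be reproduced by hand; a sketch, assuming crictl's default endpoint is configured inside the node (as it evidently is here, since the commands in the log return cleanly but empty):

    # inside the node (e.g. via `minikube ssh`): list all containers per component
    for c in kube-apiserver etcd kube-scheduler kube-controller-manager kube-proxy coredns; do
      echo "== $c =="
      sudo crictl ps -a --name="$c"
    done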
	I1213 12:04:42.424342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:42.424367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:42.453160  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:42.453198  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:42.486810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:42.486840  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:42.567003  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:42.567043  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:42.606556  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:42.606591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:42.678272  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
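With no control-plane containers to inspect, minikube falls back to host-level sources: the crio and kubelet journals, dmesg, overall container status, and kubectl describe nodes (which fails for the same connection-refused reason). The same bundle can be pulled manually; a sketch using the commands shown verbatim in the log, run inside the node:

    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a
    # or, from the host, let minikube collect everything in one pass:
    # minikube logs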
	W1213 12:04:45.037213  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:47.536268  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
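Interleaved with the addon log, a second test process (622913) is polling the no-preload-307409 node's Ready condition against 192.168.85.2:8443 and hitting the same connection refused, so it keeps retrying. A hedged sketch of an equivalent poll with kubectl, assuming a kubeconfig pointing at that cluster:

    # print the Ready condition status, retrying until the apiserver answers
    until kubectl get node no-preload-307409 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; do
      sleep 2
    done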
	I1213 12:04:45.178582  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:45.193685  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:45.193792  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:45.236374  620795 cri.go:89] found id: ""
	I1213 12:04:45.236402  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.236411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:45.236419  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:45.236487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:45.279160  620795 cri.go:89] found id: ""
	I1213 12:04:45.279193  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.279203  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:45.279210  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:45.279281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:45.308966  620795 cri.go:89] found id: ""
	I1213 12:04:45.308991  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.309000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:45.309006  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:45.309065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:45.337083  620795 cri.go:89] found id: ""
	I1213 12:04:45.337110  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.337119  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:45.337126  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:45.337212  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:45.366596  620795 cri.go:89] found id: ""
	I1213 12:04:45.366619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.366628  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:45.366635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:45.366694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:45.391548  620795 cri.go:89] found id: ""
	I1213 12:04:45.391572  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.391581  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:45.391588  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:45.391649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:45.418598  620795 cri.go:89] found id: ""
	I1213 12:04:45.418619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.418628  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:45.418635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:45.418700  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:45.448270  620795 cri.go:89] found id: ""
	I1213 12:04:45.448292  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.448301  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:45.448310  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:45.448321  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:45.478882  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:45.478907  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:45.548829  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:45.548916  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:45.567213  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:45.567382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:45.681775  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:45.681800  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:45.681816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.211634  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:48.222293  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:48.222364  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:48.249683  620795 cri.go:89] found id: ""
	I1213 12:04:48.249707  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.249715  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:48.249722  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:48.249785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:48.277977  620795 cri.go:89] found id: ""
	I1213 12:04:48.277999  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.278009  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:48.278015  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:48.278072  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:48.304052  620795 cri.go:89] found id: ""
	I1213 12:04:48.304080  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.304089  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:48.304096  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:48.304153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:48.334039  620795 cri.go:89] found id: ""
	I1213 12:04:48.334066  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.334075  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:48.334087  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:48.334151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:48.364623  620795 cri.go:89] found id: ""
	I1213 12:04:48.364646  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.364654  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:48.364661  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:48.364723  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:48.389613  620795 cri.go:89] found id: ""
	I1213 12:04:48.389684  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.389707  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:48.389718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:48.389797  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:48.418439  620795 cri.go:89] found id: ""
	I1213 12:04:48.418467  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.418477  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:48.418485  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:48.418544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:48.446312  620795 cri.go:89] found id: ""
	I1213 12:04:48.446341  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.446350  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:48.446360  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:48.446372  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:48.463031  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:48.463116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:48.558736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:48.558767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:48.558782  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.606808  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:48.606885  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:48.638169  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:48.638199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:49.729332  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:49.791669  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:49.791778  620795 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:49.794717  620795 out.go:179] * Enabled addons: 
	W1213 12:04:50.037029  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:52.037265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:49.797659  620795 addons.go:530] duration metric: took 1m53.008142261s for enable addons: enabled=[]
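The addons phase therefore ends after roughly 1m53s with an empty enabled list; both the dashboard and default-storageclass apply callbacks failed. Once the apiserver is reachable again, the result can be checked and the addons retried from the host; a sketch, with the profile name left as a placeholder since it is not printed in this excerpt:

    # inspect current addon state for the profile (replace PROFILE accordingly)
    minikube -p PROFILE addons list
    # re-enable the ones that failed
    minikube -p PROFILE addons enable dashboard
    minikube -p PROFILE addons enable default-storageclass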
	I1213 12:04:51.210580  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:51.221809  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:51.221877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:51.247182  620795 cri.go:89] found id: ""
	I1213 12:04:51.247259  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.247282  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:51.247301  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:51.247396  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:51.275541  620795 cri.go:89] found id: ""
	I1213 12:04:51.275608  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.275623  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:51.275631  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:51.275695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:51.300774  620795 cri.go:89] found id: ""
	I1213 12:04:51.300866  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.300889  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:51.300902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:51.300973  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:51.330039  620795 cri.go:89] found id: ""
	I1213 12:04:51.330064  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.330074  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:51.330080  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:51.330152  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:51.358455  620795 cri.go:89] found id: ""
	I1213 12:04:51.358482  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.358491  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:51.358497  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:51.358556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:51.387907  620795 cri.go:89] found id: ""
	I1213 12:04:51.387933  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.387942  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:51.387948  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:51.388011  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:51.414050  620795 cri.go:89] found id: ""
	I1213 12:04:51.414075  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.414084  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:51.414091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:51.414148  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:51.440682  620795 cri.go:89] found id: ""
	I1213 12:04:51.440715  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.440729  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:51.440739  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:51.440752  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:51.502275  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:51.502296  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:51.502308  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:51.533683  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:51.533722  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:51.590439  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:51.590468  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:51.668678  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:51.668719  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.186166  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:54.196649  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:54.196718  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:54.221630  620795 cri.go:89] found id: ""
	I1213 12:04:54.221656  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.221665  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:54.221672  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:54.221729  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1213 12:04:54.537026  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:56.537082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:54.246332  620795 cri.go:89] found id: ""
	I1213 12:04:54.246354  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.246362  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:54.246368  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:54.246425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:54.274363  620795 cri.go:89] found id: ""
	I1213 12:04:54.274385  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.274396  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:54.274405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:54.274465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:54.299013  620795 cri.go:89] found id: ""
	I1213 12:04:54.299036  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.299045  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:54.299051  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:54.299115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:54.325098  620795 cri.go:89] found id: ""
	I1213 12:04:54.325123  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.325133  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:54.325140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:54.325200  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:54.350290  620795 cri.go:89] found id: ""
	I1213 12:04:54.350318  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.350327  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:54.350334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:54.350394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:54.377186  620795 cri.go:89] found id: ""
	I1213 12:04:54.377209  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.377218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:54.377224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:54.377283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:54.409137  620795 cri.go:89] found id: ""
	I1213 12:04:54.409164  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.409174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:54.409184  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:54.409196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.426177  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:54.426207  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:54.491873  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:54.491896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:54.491909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:54.521061  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:54.521153  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:54.580593  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:54.580623  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.166168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:57.177178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:57.177255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:57.209135  620795 cri.go:89] found id: ""
	I1213 12:04:57.209170  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.209179  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:57.209186  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:57.209254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:57.236323  620795 cri.go:89] found id: ""
	I1213 12:04:57.236359  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.236368  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:57.236375  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:57.236433  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:57.261970  620795 cri.go:89] found id: ""
	I1213 12:04:57.261992  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.262001  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:57.262007  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:57.262064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:57.287149  620795 cri.go:89] found id: ""
	I1213 12:04:57.287171  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.287179  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:57.287186  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:57.287242  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:57.312282  620795 cri.go:89] found id: ""
	I1213 12:04:57.312307  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.312316  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:57.312322  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:57.312380  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:57.341454  620795 cri.go:89] found id: ""
	I1213 12:04:57.341480  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.341489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:57.341496  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:57.341559  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:57.366694  620795 cri.go:89] found id: ""
	I1213 12:04:57.366718  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.366729  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:57.366736  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:57.366795  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:57.392434  620795 cri.go:89] found id: ""
	I1213 12:04:57.392459  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.392468  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:57.392478  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:57.392490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:57.426595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:57.426622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.490950  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:57.490984  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:57.508294  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:57.508326  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:57.637638  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:57.637717  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:57.637746  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:04:59.037033  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:01.536339  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:00.166037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:00.211490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:00.212114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:00.294178  620795 cri.go:89] found id: ""
	I1213 12:05:00.294201  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.294210  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:00.294217  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:00.294285  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:00.376480  620795 cri.go:89] found id: ""
	I1213 12:05:00.376506  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.376516  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:00.376523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:00.376593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:00.416213  620795 cri.go:89] found id: ""
	I1213 12:05:00.416240  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.416250  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:00.416261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:00.416329  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:00.449590  620795 cri.go:89] found id: ""
	I1213 12:05:00.449620  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.449629  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:00.449637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:00.449722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:00.479461  620795 cri.go:89] found id: ""
	I1213 12:05:00.479486  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.479495  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:00.479502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:00.479589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:00.509094  620795 cri.go:89] found id: ""
	I1213 12:05:00.509123  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.509132  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:00.509138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:00.509204  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:00.583923  620795 cri.go:89] found id: ""
	I1213 12:05:00.583952  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.583962  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:00.583969  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:00.584049  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:00.624268  620795 cri.go:89] found id: ""
	I1213 12:05:00.624299  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.624309  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:00.624322  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:00.624334  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:00.701394  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:00.701419  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:00.701432  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:00.730125  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:00.730170  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:00.760465  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:00.760494  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:00.826577  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:00.826619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.345642  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:03.359010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:03.359082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:03.391792  620795 cri.go:89] found id: ""
	I1213 12:05:03.391816  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.391825  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:03.391832  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:03.391889  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:03.418730  620795 cri.go:89] found id: ""
	I1213 12:05:03.418759  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.418768  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:03.418774  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:03.418831  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:03.447034  620795 cri.go:89] found id: ""
	I1213 12:05:03.447062  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.447070  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:03.447077  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:03.447137  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:03.471737  620795 cri.go:89] found id: ""
	I1213 12:05:03.471763  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.471772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:03.471778  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:03.471832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:03.496618  620795 cri.go:89] found id: ""
	I1213 12:05:03.496641  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.496650  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:03.496656  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:03.496721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:03.538834  620795 cri.go:89] found id: ""
	I1213 12:05:03.538855  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.538901  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:03.538915  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:03.539006  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:03.577353  620795 cri.go:89] found id: ""
	I1213 12:05:03.577375  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.577437  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:03.577445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:03.577590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:03.613163  620795 cri.go:89] found id: ""
	I1213 12:05:03.613234  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.613247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:03.613257  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:03.613296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:03.652148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:03.652174  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:03.718838  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:03.718879  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.736159  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:03.736189  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:03.801478  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:03.801504  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:03.801519  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:05:03.537034  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:06.036238  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:08.037112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:06.330711  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:06.341136  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:06.341246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:06.366066  620795 cri.go:89] found id: ""
	I1213 12:05:06.366099  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.366108  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:06.366114  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:06.366178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:06.394525  620795 cri.go:89] found id: ""
	I1213 12:05:06.394563  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.394573  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:06.394580  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:06.394649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:06.424244  620795 cri.go:89] found id: ""
	I1213 12:05:06.424312  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.424336  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:06.424357  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:06.424449  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:06.450497  620795 cri.go:89] found id: ""
	I1213 12:05:06.450529  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.450538  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:06.450545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:06.450614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:06.475735  620795 cri.go:89] found id: ""
	I1213 12:05:06.475759  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.475768  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:06.475774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:06.475835  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:06.501224  620795 cri.go:89] found id: ""
	I1213 12:05:06.501248  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.501257  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:06.501263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:06.501322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:06.548385  620795 cri.go:89] found id: ""
	I1213 12:05:06.548410  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.548419  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:06.548425  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:06.548498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:06.613365  620795 cri.go:89] found id: ""
	I1213 12:05:06.613444  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.613469  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:06.613490  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:06.613525  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:06.642036  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:06.642067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:06.675194  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:06.675218  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:06.743889  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:06.743933  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:06.760968  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:06.761004  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:06.828998  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:05:10.037152  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:12.536415  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:09.329981  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:09.340577  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:09.340644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:09.368902  620795 cri.go:89] found id: ""
	I1213 12:05:09.368926  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.368935  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:09.368941  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:09.369004  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:09.397232  620795 cri.go:89] found id: ""
	I1213 12:05:09.397263  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.397273  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:09.397280  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:09.397353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:09.424425  620795 cri.go:89] found id: ""
	I1213 12:05:09.424455  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.424465  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:09.424471  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:09.424529  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:09.449435  620795 cri.go:89] found id: ""
	I1213 12:05:09.449457  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.449466  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:09.449472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:09.449534  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:09.473489  620795 cri.go:89] found id: ""
	I1213 12:05:09.473512  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.473521  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:09.473527  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:09.473584  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:09.503533  620795 cri.go:89] found id: ""
	I1213 12:05:09.503560  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.503569  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:09.503576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:09.503632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:09.569217  620795 cri.go:89] found id: ""
	I1213 12:05:09.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.569312  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:09.569331  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:09.569431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:09.616563  620795 cri.go:89] found id: ""
	I1213 12:05:09.616632  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.616663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:09.616686  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:09.616726  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:09.645190  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:09.645217  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:09.710725  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:09.710760  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:09.727200  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:09.727231  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:09.793579  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:09.793611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:09.793625  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.321617  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:12.332442  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:12.332517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:12.357812  620795 cri.go:89] found id: ""
	I1213 12:05:12.357835  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.357844  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:12.357851  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:12.357912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:12.383803  620795 cri.go:89] found id: ""
	I1213 12:05:12.383827  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.383836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:12.383842  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:12.383902  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:12.408966  620795 cri.go:89] found id: ""
	I1213 12:05:12.409044  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.409061  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:12.409069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:12.409183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:12.438466  620795 cri.go:89] found id: ""
	I1213 12:05:12.438491  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.438499  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:12.438506  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:12.438562  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:12.468347  620795 cri.go:89] found id: ""
	I1213 12:05:12.468375  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.468385  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:12.468391  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:12.468455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:12.493833  620795 cri.go:89] found id: ""
	I1213 12:05:12.493860  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.493869  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:12.493876  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:12.493936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:12.540091  620795 cri.go:89] found id: ""
	I1213 12:05:12.540120  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.540130  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:12.540137  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:12.540202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:12.593138  620795 cri.go:89] found id: ""
	I1213 12:05:12.593165  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.593174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:12.593184  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:12.593195  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:12.670751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:12.670790  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:12.688162  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:12.688196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:12.753953  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:12.753978  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:12.753990  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.782410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:12.782447  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:14.537113  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:17.037129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:15.314766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:15.325177  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:15.325244  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:15.350233  620795 cri.go:89] found id: ""
	I1213 12:05:15.350259  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.350269  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:15.350276  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:15.350332  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:15.375095  620795 cri.go:89] found id: ""
	I1213 12:05:15.375121  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.375131  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:15.375138  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:15.375198  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:15.400509  620795 cri.go:89] found id: ""
	I1213 12:05:15.400531  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.400539  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:15.400545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:15.400604  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:15.429727  620795 cri.go:89] found id: ""
	I1213 12:05:15.429749  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.429758  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:15.429765  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:15.429818  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:15.455300  620795 cri.go:89] found id: ""
	I1213 12:05:15.455321  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.455330  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:15.455336  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:15.455393  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:15.480516  620795 cri.go:89] found id: ""
	I1213 12:05:15.480540  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.480549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:15.480556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:15.480617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:15.508281  620795 cri.go:89] found id: ""
	I1213 12:05:15.508358  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.508375  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:15.508382  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:15.508453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:15.569260  620795 cri.go:89] found id: ""
	I1213 12:05:15.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.569295  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:15.569304  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:15.569317  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:15.653590  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:15.653630  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:15.670770  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:15.670805  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:15.734152  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:15.734221  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:15.734248  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:15.762906  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:15.762941  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.292789  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:18.303334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:18.303410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:18.329348  620795 cri.go:89] found id: ""
	I1213 12:05:18.329372  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.329382  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:18.329389  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:18.329455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:18.358617  620795 cri.go:89] found id: ""
	I1213 12:05:18.358638  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.358647  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:18.358653  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:18.358710  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:18.383565  620795 cri.go:89] found id: ""
	I1213 12:05:18.383589  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.383597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:18.383603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:18.383666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:18.409351  620795 cri.go:89] found id: ""
	I1213 12:05:18.409378  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.409387  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:18.409394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:18.409456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:18.435771  620795 cri.go:89] found id: ""
	I1213 12:05:18.435797  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.435806  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:18.435813  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:18.435875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:18.464513  620795 cri.go:89] found id: ""
	I1213 12:05:18.464539  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.464549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:18.464556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:18.464659  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:18.490219  620795 cri.go:89] found id: ""
	I1213 12:05:18.490244  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.490252  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:18.490260  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:18.490317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:18.532969  620795 cri.go:89] found id: ""
	I1213 12:05:18.532995  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.533004  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:18.533013  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:18.533027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.595123  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:18.595154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:18.672161  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:18.672201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:18.689194  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:18.689222  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:18.754503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:18.754526  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:18.754539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:05:19.537079  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:22.037194  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:21.283365  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:21.294092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:21.294183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:21.321526  620795 cri.go:89] found id: ""
	I1213 12:05:21.321549  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.321559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:21.321565  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:21.321622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:21.349919  620795 cri.go:89] found id: ""
	I1213 12:05:21.349943  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.349952  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:21.349958  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:21.350021  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:21.379881  620795 cri.go:89] found id: ""
	I1213 12:05:21.379906  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.379915  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:21.379922  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:21.379982  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:21.405656  620795 cri.go:89] found id: ""
	I1213 12:05:21.405679  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.405687  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:21.405694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:21.405754  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:21.435716  620795 cri.go:89] found id: ""
	I1213 12:05:21.435752  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.435762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:21.435769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:21.435839  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:21.461176  620795 cri.go:89] found id: ""
	I1213 12:05:21.461199  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.461207  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:21.461214  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:21.461271  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:21.487321  620795 cri.go:89] found id: ""
	I1213 12:05:21.487357  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.487366  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:21.487372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:21.487438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:21.513663  620795 cri.go:89] found id: ""
	I1213 12:05:21.513687  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.513696  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:21.513706  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:21.513740  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:21.547538  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:21.547713  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:21.648986  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:21.649007  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:21.649020  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:21.676895  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:21.676929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:21.706237  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:21.706268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:24.536202  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:26.537127  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:24.271406  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:24.281916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:24.281984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:24.306547  620795 cri.go:89] found id: ""
	I1213 12:05:24.306570  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.306579  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:24.306586  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:24.306645  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:24.334194  620795 cri.go:89] found id: ""
	I1213 12:05:24.334218  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.334227  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:24.334234  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:24.334291  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:24.360113  620795 cri.go:89] found id: ""
	I1213 12:05:24.360139  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.360148  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:24.360154  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:24.360219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:24.385854  620795 cri.go:89] found id: ""
	I1213 12:05:24.385879  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.385889  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:24.385896  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:24.385960  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:24.411999  620795 cri.go:89] found id: ""
	I1213 12:05:24.412025  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.412034  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:24.412042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:24.412102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:24.438300  620795 cri.go:89] found id: ""
	I1213 12:05:24.438325  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.438335  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:24.438347  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:24.438405  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:24.464325  620795 cri.go:89] found id: ""
	I1213 12:05:24.464351  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.464361  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:24.464369  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:24.464430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:24.491896  620795 cri.go:89] found id: ""
	I1213 12:05:24.491920  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.491930  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:24.491939  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:24.491971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:24.519363  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:24.519445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:24.616473  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:24.616502  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:24.692608  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:24.692645  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:24.711650  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:24.711689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:24.775602  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
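	The "failed describe nodes" block shows that `kubectl describe nodes`, run on the node with the node-local kubeconfig, never gets past the TCP connect to localhost:8443, which is consistent with crictl reporting no kube-apiserver container at all. A hedged way to confirm the same thing directly from inside the node, using standard tools rather than anything the harness runs:

	    sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	    curl -k https://localhost:8443/healthz   # returns "ok" once the apiserver is serving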
	I1213 12:05:27.275849  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:27.286597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:27.286680  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:27.311787  620795 cri.go:89] found id: ""
	I1213 12:05:27.311813  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.311822  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:27.311829  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:27.311893  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:27.341056  620795 cri.go:89] found id: ""
	I1213 12:05:27.341123  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.341146  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:27.341160  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:27.341233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:27.365944  620795 cri.go:89] found id: ""
	I1213 12:05:27.365978  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.365986  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:27.365993  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:27.366057  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:27.390576  620795 cri.go:89] found id: ""
	I1213 12:05:27.390611  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.390626  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:27.390633  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:27.390702  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:27.420415  620795 cri.go:89] found id: ""
	I1213 12:05:27.420439  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.420448  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:27.420454  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:27.420516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:27.445745  620795 cri.go:89] found id: ""
	I1213 12:05:27.445812  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.445835  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:27.445853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:27.445936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:27.475470  620795 cri.go:89] found id: ""
	I1213 12:05:27.475508  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.475538  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:27.475547  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:27.475615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:27.502195  620795 cri.go:89] found id: ""
	I1213 12:05:27.502222  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.502231  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:27.502240  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:27.502252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:27.597636  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:27.597744  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:27.629736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:27.629763  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:27.694305  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.694327  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:27.694339  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:27.723090  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:27.723129  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:29.037051  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:31.536823  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:30.253217  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:30.264373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:30.264446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:30.290413  620795 cri.go:89] found id: ""
	I1213 12:05:30.290440  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.290450  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:30.290457  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:30.290517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:30.318052  620795 cri.go:89] found id: ""
	I1213 12:05:30.318079  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.318096  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:30.318104  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:30.318172  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:30.343233  620795 cri.go:89] found id: ""
	I1213 12:05:30.343267  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.343277  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:30.343283  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:30.343349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:30.373053  620795 cri.go:89] found id: ""
	I1213 12:05:30.373077  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.373086  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:30.373092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:30.373149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:30.401783  620795 cri.go:89] found id: ""
	I1213 12:05:30.401862  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.401879  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:30.401886  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:30.401955  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:30.427557  620795 cri.go:89] found id: ""
	I1213 12:05:30.427580  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.427589  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:30.427595  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:30.427652  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:30.452324  620795 cri.go:89] found id: ""
	I1213 12:05:30.452404  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.452426  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:30.452445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:30.452538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:30.485213  620795 cri.go:89] found id: ""
	I1213 12:05:30.485283  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.485307  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:30.485325  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:30.485337  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:30.567099  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:30.571250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:30.599905  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:30.599987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:30.671402  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:30.671475  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:30.671544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:30.700275  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:30.700310  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:33.229307  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:33.240030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:33.240101  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:33.264516  620795 cri.go:89] found id: ""
	I1213 12:05:33.264540  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.264550  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:33.264557  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:33.264622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:33.288665  620795 cri.go:89] found id: ""
	I1213 12:05:33.288694  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.288704  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:33.288711  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:33.288772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:33.318238  620795 cri.go:89] found id: ""
	I1213 12:05:33.318314  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.318338  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:33.318356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:33.318437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:33.342548  620795 cri.go:89] found id: ""
	I1213 12:05:33.342582  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.342592  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:33.342598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:33.342667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:33.368791  620795 cri.go:89] found id: ""
	I1213 12:05:33.368814  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.368823  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:33.368829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:33.368887  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:33.395218  620795 cri.go:89] found id: ""
	I1213 12:05:33.395254  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.395263  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:33.395270  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:33.395342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:33.422228  620795 cri.go:89] found id: ""
	I1213 12:05:33.422263  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.422272  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:33.422279  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:33.422345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:33.448101  620795 cri.go:89] found id: ""
	I1213 12:05:33.448126  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.448136  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:33.448146  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:33.448164  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:33.513958  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:33.513995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:33.536519  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:33.536547  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:33.642718  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:33.642742  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:33.642757  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:33.671233  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:33.671268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:34.036325  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:36.536291  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:36.205718  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:36.216490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:36.216599  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:36.242239  620795 cri.go:89] found id: ""
	I1213 12:05:36.242267  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.242277  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:36.242284  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:36.242345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:36.267114  620795 cri.go:89] found id: ""
	I1213 12:05:36.267140  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.267149  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:36.267155  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:36.267221  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:36.292484  620795 cri.go:89] found id: ""
	I1213 12:05:36.292510  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.292519  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:36.292525  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:36.292586  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:36.317342  620795 cri.go:89] found id: ""
	I1213 12:05:36.317365  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.317374  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:36.317380  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:36.317442  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:36.346675  620795 cri.go:89] found id: ""
	I1213 12:05:36.346746  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.346770  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:36.346788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:36.346878  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:36.374350  620795 cri.go:89] found id: ""
	I1213 12:05:36.374416  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.374440  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:36.374459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:36.374550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:36.401836  620795 cri.go:89] found id: ""
	I1213 12:05:36.401904  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.401927  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:36.401947  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:36.402023  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:36.436530  620795 cri.go:89] found id: ""
	I1213 12:05:36.436612  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.436635  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:36.436653  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:36.436680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:36.464595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:36.464663  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:36.550070  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:36.550121  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:36.581383  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:36.581414  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:36.674763  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:36.674830  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:36.674854  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
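	Each retry cycle above opens with `pgrep -xnf kube-apiserver.*minikube.*` and then falls back to per-component `crictl ps -a --quiet --name=...` queries, so the probe presumably covers the apiserver whether it runs as a bare process or as a CRI container; every query in this run comes back empty. Reproducing just that probe by hand (a sketch using only the commands visible in the log):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = no container found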
	I1213 12:05:39.203663  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:39.214134  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:39.214211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:05:39.036349  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:41.036401  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:43.037206  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:39.240674  620795 cri.go:89] found id: ""
	I1213 12:05:39.240705  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.240714  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:39.240721  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:39.240786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:39.265873  620795 cri.go:89] found id: ""
	I1213 12:05:39.265895  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.265903  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:39.265909  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:39.265966  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:39.291928  620795 cri.go:89] found id: ""
	I1213 12:05:39.291952  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.291960  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:39.291978  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:39.292037  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:39.317111  620795 cri.go:89] found id: ""
	I1213 12:05:39.317144  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.317153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:39.317160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:39.317219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:39.341971  620795 cri.go:89] found id: ""
	I1213 12:05:39.341993  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.342002  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:39.342009  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:39.342065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:39.370095  620795 cri.go:89] found id: ""
	I1213 12:05:39.370166  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.370192  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:39.370212  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:39.370297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:39.396661  620795 cri.go:89] found id: ""
	I1213 12:05:39.396740  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.396765  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:39.396777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:39.396855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:39.426139  620795 cri.go:89] found id: ""
	I1213 12:05:39.426167  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.426177  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:39.426188  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:39.426199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:39.458970  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:39.459002  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:39.525484  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:39.525523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:39.554066  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:39.554149  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:39.647487  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:39.647508  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:39.647543  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.175675  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:42.189064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:42.189149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:42.220105  620795 cri.go:89] found id: ""
	I1213 12:05:42.220135  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.220156  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:42.220164  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:42.220229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:42.250459  620795 cri.go:89] found id: ""
	I1213 12:05:42.250486  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.250495  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:42.250502  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:42.250570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:42.278746  620795 cri.go:89] found id: ""
	I1213 12:05:42.278773  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.278785  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:42.278793  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:42.278855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:42.307046  620795 cri.go:89] found id: ""
	I1213 12:05:42.307073  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.307083  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:42.307092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:42.307153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:42.335010  620795 cri.go:89] found id: ""
	I1213 12:05:42.335035  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.335046  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:42.335052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:42.335114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:42.362128  620795 cri.go:89] found id: ""
	I1213 12:05:42.362154  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.362163  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:42.362170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:42.362231  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:42.396146  620795 cri.go:89] found id: ""
	I1213 12:05:42.396175  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.396186  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:42.396193  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:42.396254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:42.423111  620795 cri.go:89] found id: ""
	I1213 12:05:42.423137  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.423146  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:42.423155  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:42.423167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:42.440295  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:42.440325  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:42.504038  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:42.504059  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:42.504071  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.550928  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:42.550966  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:42.608904  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:42.608935  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:45.037527  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:47.536245  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:45.181124  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:45.197731  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:45.197873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:45.246027  620795 cri.go:89] found id: ""
	I1213 12:05:45.246070  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.246081  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:45.246106  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:45.246220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:45.279332  620795 cri.go:89] found id: ""
	I1213 12:05:45.279388  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.279398  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:45.279404  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:45.279509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:45.314910  620795 cri.go:89] found id: ""
	I1213 12:05:45.314988  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.315000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:45.315010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:45.315114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:45.343055  620795 cri.go:89] found id: ""
	I1213 12:05:45.343130  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.343153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:45.343175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:45.343282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:45.370166  620795 cri.go:89] found id: ""
	I1213 12:05:45.370240  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.370275  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:45.370299  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:45.370391  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:45.396456  620795 cri.go:89] found id: ""
	I1213 12:05:45.396480  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.396489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:45.396495  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:45.396550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:45.421687  620795 cri.go:89] found id: ""
	I1213 12:05:45.421711  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.421720  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:45.421726  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:45.421781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:45.446648  620795 cri.go:89] found id: ""
	I1213 12:05:45.446672  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.446681  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:45.446691  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:45.446702  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:45.512020  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:45.512055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:45.543051  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:45.543084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:45.640767  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:45.640789  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:45.640802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:45.670787  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:45.670822  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:48.201632  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:48.211975  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:48.212046  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:48.241331  620795 cri.go:89] found id: ""
	I1213 12:05:48.241355  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.241364  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:48.241371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:48.241430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:48.266481  620795 cri.go:89] found id: ""
	I1213 12:05:48.266506  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.266515  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:48.266523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:48.266581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:48.292562  620795 cri.go:89] found id: ""
	I1213 12:05:48.292587  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.292597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:48.292604  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:48.292666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:48.316829  620795 cri.go:89] found id: ""
	I1213 12:05:48.316853  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.316862  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:48.316869  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:48.316928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:48.341279  620795 cri.go:89] found id: ""
	I1213 12:05:48.341304  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.341313  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:48.341320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:48.341395  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:48.370602  620795 cri.go:89] found id: ""
	I1213 12:05:48.370668  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.370684  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:48.370692  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:48.370757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:48.395975  620795 cri.go:89] found id: ""
	I1213 12:05:48.396001  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.396011  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:48.396017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:48.396076  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:48.422104  620795 cri.go:89] found id: ""
	I1213 12:05:48.422129  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.422139  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:48.422150  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:48.422163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:48.487414  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:48.487451  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:48.504893  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:48.504924  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:48.613440  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:48.613472  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:48.613485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:48.643454  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:48.643496  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:49.537116  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:52.036281  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:51.173081  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:51.184091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:51.184220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:51.209714  620795 cri.go:89] found id: ""
	I1213 12:05:51.209741  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.209751  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:51.209757  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:51.209815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:51.236381  620795 cri.go:89] found id: ""
	I1213 12:05:51.236414  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.236423  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:51.236429  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:51.236495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:51.266394  620795 cri.go:89] found id: ""
	I1213 12:05:51.266428  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.266437  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:51.266443  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:51.266509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:51.293949  620795 cri.go:89] found id: ""
	I1213 12:05:51.293981  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.293991  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:51.293998  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:51.294062  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:51.324019  620795 cri.go:89] found id: ""
	I1213 12:05:51.324042  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.324056  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:51.324062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:51.324145  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:51.352992  620795 cri.go:89] found id: ""
	I1213 12:05:51.353023  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.353032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:51.353039  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:51.353098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:51.378872  620795 cri.go:89] found id: ""
	I1213 12:05:51.378898  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.378907  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:51.378914  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:51.378976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:51.406670  620795 cri.go:89] found id: ""
	I1213 12:05:51.406695  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.406703  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:51.406713  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:51.406728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:51.469269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:51.469290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:51.469304  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:51.497318  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:51.497352  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:51.534646  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:51.534680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:51.618348  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:51.618388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.137197  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:54.147708  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:54.147778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:54.173064  620795 cri.go:89] found id: ""
	I1213 12:05:54.173089  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.173098  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:54.173105  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:54.173164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:54.198688  620795 cri.go:89] found id: ""
	I1213 12:05:54.198713  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.198723  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:54.198733  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:54.198789  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:54.224472  620795 cri.go:89] found id: ""
	I1213 12:05:54.224497  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.224506  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:54.224512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:54.224571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 12:05:54.536956  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:56.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:54.254875  620795 cri.go:89] found id: ""
	I1213 12:05:54.254900  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.254909  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:54.254916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:54.254985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:54.286287  620795 cri.go:89] found id: ""
	I1213 12:05:54.286314  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.286322  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:54.286329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:54.286384  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:54.312009  620795 cri.go:89] found id: ""
	I1213 12:05:54.312034  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.312043  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:54.312050  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:54.312109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:54.338472  620795 cri.go:89] found id: ""
	I1213 12:05:54.338506  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.338516  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:54.338522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:54.338590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:54.363767  620795 cri.go:89] found id: ""
	I1213 12:05:54.363791  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.363799  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:54.363810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:54.363827  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:54.429426  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:54.429462  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.446820  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:54.446859  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:54.514113  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:54.514137  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:54.514150  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:54.547597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:54.547688  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.126156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:57.136777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:57.136854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:57.166084  620795 cri.go:89] found id: ""
	I1213 12:05:57.166107  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.166116  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:57.166122  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:57.166180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:57.194344  620795 cri.go:89] found id: ""
	I1213 12:05:57.194368  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.194377  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:57.194384  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:57.194445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:57.220264  620795 cri.go:89] found id: ""
	I1213 12:05:57.220289  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.220298  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:57.220305  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:57.220362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:57.245200  620795 cri.go:89] found id: ""
	I1213 12:05:57.245222  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.245230  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:57.245236  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:57.245292  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:57.272963  620795 cri.go:89] found id: ""
	I1213 12:05:57.272987  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.272996  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:57.273003  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:57.273061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:57.297916  620795 cri.go:89] found id: ""
	I1213 12:05:57.297940  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.297947  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:57.297954  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:57.298016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:57.323201  620795 cri.go:89] found id: ""
	I1213 12:05:57.323226  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.323235  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:57.323241  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:57.323301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:57.348727  620795 cri.go:89] found id: ""
	I1213 12:05:57.348759  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.348769  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:57.348779  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:57.348794  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:57.424991  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:57.425015  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:57.425027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:57.454618  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:57.454652  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.482599  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:57.482627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:57.556901  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:57.556982  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:05:58.537235  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:01.037253  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:00.078226  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:00.114729  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:00.114815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:00.214510  620795 cri.go:89] found id: ""
	I1213 12:06:00.214537  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.214547  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:00.214560  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:00.214644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:00.283401  620795 cri.go:89] found id: ""
	I1213 12:06:00.283433  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.283443  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:00.283450  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:00.283564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:00.333853  620795 cri.go:89] found id: ""
	I1213 12:06:00.333946  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.333974  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:00.333999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:00.334124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:00.370564  620795 cri.go:89] found id: ""
	I1213 12:06:00.370647  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.370670  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:00.370693  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:00.370796  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:00.400318  620795 cri.go:89] found id: ""
	I1213 12:06:00.400355  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.400365  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:00.400373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:00.400451  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:00.429349  620795 cri.go:89] found id: ""
	I1213 12:06:00.429376  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.429387  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:00.429394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:00.429480  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:00.457513  620795 cri.go:89] found id: ""
	I1213 12:06:00.457540  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.457549  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:00.457555  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:00.457617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:00.484050  620795 cri.go:89] found id: ""
	I1213 12:06:00.484077  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.484086  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:00.484096  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:00.484110  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:00.564314  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:00.564357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:00.586853  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:00.586884  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:00.678609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:00.678679  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:00.678699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:00.708726  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:00.708764  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:03.239868  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:03.250271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:03.250342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:03.278221  620795 cri.go:89] found id: ""
	I1213 12:06:03.278246  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.278254  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:03.278261  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:03.278323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:03.307255  620795 cri.go:89] found id: ""
	I1213 12:06:03.307280  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.307288  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:03.307295  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:03.307358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:03.334371  620795 cri.go:89] found id: ""
	I1213 12:06:03.334394  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.334402  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:03.334408  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:03.334465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:03.359920  620795 cri.go:89] found id: ""
	I1213 12:06:03.359947  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.359959  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:03.359966  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:03.360026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:03.388349  620795 cri.go:89] found id: ""
	I1213 12:06:03.388373  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.388382  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:03.388389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:03.388446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:03.413684  620795 cri.go:89] found id: ""
	I1213 12:06:03.413712  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.413721  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:03.413727  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:03.413786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:03.438590  620795 cri.go:89] found id: ""
	I1213 12:06:03.438613  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.438622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:03.438629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:03.438686  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:03.466031  620795 cri.go:89] found id: ""
	I1213 12:06:03.466065  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.466074  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:03.466084  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:03.466095  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:03.540002  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:03.540037  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:03.581254  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:03.581285  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:03.657609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:03.657641  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:03.657654  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:03.686248  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:03.686284  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:03.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:05.537188  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:07.537266  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:06.215254  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:06.226059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:06.226130  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:06.252206  620795 cri.go:89] found id: ""
	I1213 12:06:06.252229  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.252237  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:06.252243  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:06.252306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:06.282327  620795 cri.go:89] found id: ""
	I1213 12:06:06.282349  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.282358  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:06.282364  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:06.282425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:06.312866  620795 cri.go:89] found id: ""
	I1213 12:06:06.312889  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.312898  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:06.312905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:06.312964  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:06.339757  620795 cri.go:89] found id: ""
	I1213 12:06:06.339828  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.339851  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:06.339865  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:06.339937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:06.366465  620795 cri.go:89] found id: ""
	I1213 12:06:06.366491  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.366508  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:06.366515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:06.366589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:06.395704  620795 cri.go:89] found id: ""
	I1213 12:06:06.395727  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.395735  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:06.395742  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:06.395800  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:06.420941  620795 cri.go:89] found id: ""
	I1213 12:06:06.420966  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.420974  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:06.420981  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:06.421040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:06.446747  620795 cri.go:89] found id: ""
	I1213 12:06:06.446771  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.446781  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:06.446790  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:06.446802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:06.515396  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:06.515437  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:06.537368  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:06.537458  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:06.638118  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:06.638202  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:06.638230  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:06.668749  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:06.668789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.204205  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:09.214694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:09.214763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:06:10.037386  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:12.536953  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:09.240252  620795 cri.go:89] found id: ""
	I1213 12:06:09.240291  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.240301  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:09.240307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:09.240372  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:09.267161  620795 cri.go:89] found id: ""
	I1213 12:06:09.267188  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.267197  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:09.267203  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:09.267263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:09.292472  620795 cri.go:89] found id: ""
	I1213 12:06:09.292501  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.292510  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:09.292517  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:09.292581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:09.317718  620795 cri.go:89] found id: ""
	I1213 12:06:09.317745  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.317754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:09.317760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:09.317819  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:09.342979  620795 cri.go:89] found id: ""
	I1213 12:06:09.343006  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.343015  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:09.343021  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:09.343080  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:09.370344  620795 cri.go:89] found id: ""
	I1213 12:06:09.370368  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.370377  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:09.370383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:09.370441  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:09.397428  620795 cri.go:89] found id: ""
	I1213 12:06:09.397451  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.397461  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:09.397467  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:09.397527  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:09.422862  620795 cri.go:89] found id: ""
	I1213 12:06:09.422890  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.422900  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:09.422909  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:09.422923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:09.486031  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:09.486057  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:09.486070  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:09.514736  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:09.514772  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.586482  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:09.586558  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:09.660422  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:09.660459  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.179299  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:12.190230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:12.190302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:12.216052  620795 cri.go:89] found id: ""
	I1213 12:06:12.216076  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.216085  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:12.216092  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:12.216150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:12.245417  620795 cri.go:89] found id: ""
	I1213 12:06:12.245443  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.245453  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:12.245460  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:12.245525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:12.272357  620795 cri.go:89] found id: ""
	I1213 12:06:12.272382  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.272391  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:12.272397  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:12.272459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:12.297431  620795 cri.go:89] found id: ""
	I1213 12:06:12.297458  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.297467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:12.297479  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:12.297537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:12.322773  620795 cri.go:89] found id: ""
	I1213 12:06:12.322796  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.322805  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:12.322829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:12.322894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:12.348212  620795 cri.go:89] found id: ""
	I1213 12:06:12.348278  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.348293  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:12.348301  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:12.348360  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:12.378078  620795 cri.go:89] found id: ""
	I1213 12:06:12.378105  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.378115  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:12.378122  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:12.378186  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:12.403938  620795 cri.go:89] found id: ""
	I1213 12:06:12.404005  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.404029  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:12.404044  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:12.404056  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:12.432395  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:12.432433  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:12.465021  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:12.465055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:12.533527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:12.533564  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.557847  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:12.557876  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:12.649280  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:15.036244  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:17.037163  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:15.150199  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:15.161093  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:15.161164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:15.188375  620795 cri.go:89] found id: ""
	I1213 12:06:15.188402  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.188411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:15.188420  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:15.188494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:15.213569  620795 cri.go:89] found id: ""
	I1213 12:06:15.213592  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.213601  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:15.213607  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:15.213667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:15.244468  620795 cri.go:89] found id: ""
	I1213 12:06:15.244490  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.244499  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:15.244505  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:15.244565  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:15.269446  620795 cri.go:89] found id: ""
	I1213 12:06:15.269469  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.269478  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:15.269484  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:15.269544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:15.297921  620795 cri.go:89] found id: ""
	I1213 12:06:15.297947  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.297957  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:15.297965  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:15.298029  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:15.323225  620795 cri.go:89] found id: ""
	I1213 12:06:15.323248  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.323256  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:15.323263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:15.323322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:15.349965  620795 cri.go:89] found id: ""
	I1213 12:06:15.349988  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.349999  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:15.350005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:15.350067  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:15.378207  620795 cri.go:89] found id: ""
	I1213 12:06:15.378236  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.378247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:15.378258  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:15.378271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:15.443150  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:15.443182  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:15.459353  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:15.459388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:15.546545  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:15.546611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:15.546638  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:15.582173  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:15.582258  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:18.126037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:18.137115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:18.137190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:18.164991  620795 cri.go:89] found id: ""
	I1213 12:06:18.165017  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.165026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:18.165033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:18.165092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:18.191806  620795 cri.go:89] found id: ""
	I1213 12:06:18.191832  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.191841  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:18.191848  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:18.191906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:18.222284  620795 cri.go:89] found id: ""
	I1213 12:06:18.222310  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.222320  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:18.222329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:18.222389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:18.250305  620795 cri.go:89] found id: ""
	I1213 12:06:18.250332  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.250342  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:18.250348  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:18.250406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:18.276798  620795 cri.go:89] found id: ""
	I1213 12:06:18.276823  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.276833  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:18.276841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:18.276901  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:18.301916  620795 cri.go:89] found id: ""
	I1213 12:06:18.301943  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.301952  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:18.301959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:18.302017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:18.327545  620795 cri.go:89] found id: ""
	I1213 12:06:18.327569  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.327577  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:18.327584  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:18.327681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:18.352817  620795 cri.go:89] found id: ""
	I1213 12:06:18.352844  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.352854  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:18.352863  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:18.352902  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:18.418564  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:18.418601  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:18.434897  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:18.434928  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:18.499340  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:18.499366  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:18.499380  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:18.528897  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:18.528980  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:19.537261  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:22.037303  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:21.104122  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:21.114671  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:21.114786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:21.140990  620795 cri.go:89] found id: ""
	I1213 12:06:21.141014  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.141024  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:21.141030  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:21.141087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:21.168480  620795 cri.go:89] found id: ""
	I1213 12:06:21.168510  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.168519  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:21.168526  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:21.168583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:21.193893  620795 cri.go:89] found id: ""
	I1213 12:06:21.193916  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.193924  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:21.193930  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:21.193985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:21.222789  620795 cri.go:89] found id: ""
	I1213 12:06:21.222811  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.222820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:21.222827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:21.222885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:21.254379  620795 cri.go:89] found id: ""
	I1213 12:06:21.254402  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.254411  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:21.254417  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:21.254476  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:21.280020  620795 cri.go:89] found id: ""
	I1213 12:06:21.280049  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.280058  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:21.280065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:21.280123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:21.305920  620795 cri.go:89] found id: ""
	I1213 12:06:21.305942  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.305952  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:21.305957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:21.306031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:21.334376  620795 cri.go:89] found id: ""
	I1213 12:06:21.334400  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.334409  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:21.334417  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:21.334429  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:21.362868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:21.362906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:21.397678  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:21.397727  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:21.465535  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:21.465574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:21.482417  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:21.482443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:21.566636  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:24.068339  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:24.079607  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:24.079684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:24.105575  620795 cri.go:89] found id: ""
	I1213 12:06:24.105609  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.105619  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:24.105626  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:24.105696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:24.131798  620795 cri.go:89] found id: ""
	I1213 12:06:24.131830  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.131840  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:24.131846  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:24.131905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:24.157068  620795 cri.go:89] found id: ""
	I1213 12:06:24.157096  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.157106  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:24.157113  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:24.157168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:24.186737  620795 cri.go:89] found id: ""
	I1213 12:06:24.186762  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.186772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:24.186779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:24.186843  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:24.214700  620795 cri.go:89] found id: ""
	I1213 12:06:24.214726  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.214745  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:24.214751  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:24.214815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1213 12:06:24.537013  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:27.037104  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:24.242048  620795 cri.go:89] found id: ""
	I1213 12:06:24.242074  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.242083  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:24.242090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:24.242180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:24.270953  620795 cri.go:89] found id: ""
	I1213 12:06:24.270978  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.270987  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:24.270994  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:24.271074  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:24.296220  620795 cri.go:89] found id: ""
	I1213 12:06:24.296246  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.296256  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:24.296267  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:24.296278  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:24.325330  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:24.325367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:24.355217  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:24.355255  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:24.421526  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:24.421566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:24.438978  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:24.439012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:24.514169  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.015192  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:27.026779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:27.026871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:27.054321  620795 cri.go:89] found id: ""
	I1213 12:06:27.054347  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.054357  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:27.054364  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:27.054423  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:27.084443  620795 cri.go:89] found id: ""
	I1213 12:06:27.084467  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.084476  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:27.084482  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:27.084542  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:27.110224  620795 cri.go:89] found id: ""
	I1213 12:06:27.110251  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.110260  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:27.110267  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:27.110326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:27.141821  620795 cri.go:89] found id: ""
	I1213 12:06:27.141847  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.141857  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:27.141863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:27.141953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:27.168110  620795 cri.go:89] found id: ""
	I1213 12:06:27.168143  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.168153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:27.168160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:27.168228  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:27.193708  620795 cri.go:89] found id: ""
	I1213 12:06:27.193775  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.193791  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:27.193802  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:27.193862  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:27.220542  620795 cri.go:89] found id: ""
	I1213 12:06:27.220569  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.220578  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:27.220585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:27.220673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:27.248536  620795 cri.go:89] found id: ""
	I1213 12:06:27.248614  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.248630  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:27.248641  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:27.248653  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:27.314354  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:27.314389  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:27.331795  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:27.331824  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:27.397269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.397290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:27.397303  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:27.425995  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:27.426034  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:29.537185  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:32.037043  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:29.964336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:29.975190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:29.975264  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:30.020235  620795 cri.go:89] found id: ""
	I1213 12:06:30.020330  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.020353  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:30.020373  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:30.020492  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:30.064384  620795 cri.go:89] found id: ""
	I1213 12:06:30.064422  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.064431  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:30.064438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:30.064537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:30.093930  620795 cri.go:89] found id: ""
	I1213 12:06:30.093974  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.094003  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:30.094018  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:30.094092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:30.121799  620795 cri.go:89] found id: ""
	I1213 12:06:30.121830  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.121846  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:30.121854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:30.121994  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:30.150127  620795 cri.go:89] found id: ""
	I1213 12:06:30.150153  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.150163  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:30.150170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:30.150232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:30.177848  620795 cri.go:89] found id: ""
	I1213 12:06:30.177873  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.177883  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:30.177889  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:30.177948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:30.204179  620795 cri.go:89] found id: ""
	I1213 12:06:30.204216  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.204225  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:30.204235  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:30.204295  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:30.230625  620795 cri.go:89] found id: ""
	I1213 12:06:30.230653  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.230663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:30.230673  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:30.230685  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:30.297598  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:30.297634  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:30.314962  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:30.314993  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:30.380114  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:30.380136  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:30.380148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:30.408485  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:30.408523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
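	[Reading aid, not part of the captured log] The cycle above, repeated below with fresh timestamps, is minikube's retry loop: it looks for a kube-apiserver process, then lists CRI containers for each expected control-plane component by name; every `crictl ps -a --quiet --name=...` call returns an empty ID list, which is why each pass ends in "No container was found matching ..." warnings. A minimal local sketch of that per-component check, assuming `crictl` is installed and runnable via sudo (the log runs it over SSH instead), might look like:

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the check in the log: run
	// `sudo crictl ps -a --quiet --name=<name>` and return any container IDs printed.
	// An empty slice corresponds to the "No container was found matching" warnings above.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// Same component names minikube polls for in the log above.
		for _, name := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		} {
			ids, err := listContainerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
	```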
	I1213 12:06:32.936773  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:32.947334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:32.947408  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:32.974265  620795 cri.go:89] found id: ""
	I1213 12:06:32.974291  620795 logs.go:282] 0 containers: []
	W1213 12:06:32.974300  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:32.974307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:32.974365  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:33.005585  620795 cri.go:89] found id: ""
	I1213 12:06:33.005616  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.005627  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:33.005633  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:33.005704  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:33.036036  620795 cri.go:89] found id: ""
	I1213 12:06:33.036058  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.036072  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:33.036079  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:33.036136  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:33.062415  620795 cri.go:89] found id: ""
	I1213 12:06:33.062439  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.062448  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:33.062455  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:33.062515  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:33.091004  620795 cri.go:89] found id: ""
	I1213 12:06:33.091072  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.091095  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:33.091115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:33.091193  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:33.116964  620795 cri.go:89] found id: ""
	I1213 12:06:33.116989  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.116999  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:33.117005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:33.117084  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:33.143886  620795 cri.go:89] found id: ""
	I1213 12:06:33.143908  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.143918  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:33.143924  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:33.143984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:33.177672  620795 cri.go:89] found id: ""
	I1213 12:06:33.177697  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.177707  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:33.177716  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:33.177728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:33.194235  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:33.194266  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:33.258679  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:33.258703  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:33.258715  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:33.287694  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:33.287731  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:33.319142  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:33.319168  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:34.037106  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:36.037218  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:35.883653  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:35.894470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:35.894540  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:35.922164  620795 cri.go:89] found id: ""
	I1213 12:06:35.922243  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.922268  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:35.922286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:35.922378  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:35.948794  620795 cri.go:89] found id: ""
	I1213 12:06:35.948824  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.948833  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:35.948840  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:35.948916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:35.976985  620795 cri.go:89] found id: ""
	I1213 12:06:35.977012  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.977023  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:35.977030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:35.977097  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:36.008179  620795 cri.go:89] found id: ""
	I1213 12:06:36.008210  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.008221  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:36.008229  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:36.008306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:36.037414  620795 cri.go:89] found id: ""
	I1213 12:06:36.037434  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.037442  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:36.037448  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:36.037505  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:36.066253  620795 cri.go:89] found id: ""
	I1213 12:06:36.066290  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.066304  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:36.066319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:36.066394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:36.093841  620795 cri.go:89] found id: ""
	I1213 12:06:36.093938  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.093955  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:36.093963  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:36.094042  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:36.119692  620795 cri.go:89] found id: ""
	I1213 12:06:36.119728  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.119737  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:36.119747  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:36.119761  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:36.136247  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:36.136322  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:36.202464  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:36.202486  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:36.202500  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:36.230571  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:36.230606  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:36.257928  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:36.257955  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:38.826068  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:38.841833  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:38.841915  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:38.871763  620795 cri.go:89] found id: ""
	I1213 12:06:38.871788  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.871797  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:38.871803  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:38.871870  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:38.897931  620795 cri.go:89] found id: ""
	I1213 12:06:38.897956  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.897966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:38.897972  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:38.898064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:38.928095  620795 cri.go:89] found id: ""
	I1213 12:06:38.928121  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.928131  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:38.928138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:38.928202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:38.954066  620795 cri.go:89] found id: ""
	I1213 12:06:38.954090  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.954098  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:38.954105  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:38.954168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:38.978723  620795 cri.go:89] found id: ""
	I1213 12:06:38.978752  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.978762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:38.978769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:38.978825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:39.006341  620795 cri.go:89] found id: ""
	I1213 12:06:39.006374  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.006383  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:39.006390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:39.006462  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:39.032585  620795 cri.go:89] found id: ""
	I1213 12:06:39.032612  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.032622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:39.032629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:39.032699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:39.061395  620795 cri.go:89] found id: ""
	I1213 12:06:39.061426  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.061436  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:39.061446  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:39.061457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:39.091343  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:39.091367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:39.160940  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:39.160987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:39.177451  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:39.177490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:38.536279  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:40.537278  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:43.037128  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:39.246489  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:39.246510  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:39.246524  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:41.775639  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:41.794476  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:41.794600  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:41.831000  620795 cri.go:89] found id: ""
	I1213 12:06:41.831074  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.831102  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:41.831121  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:41.831203  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:41.872779  620795 cri.go:89] found id: ""
	I1213 12:06:41.872806  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.872816  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:41.872823  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:41.872903  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:41.902394  620795 cri.go:89] found id: ""
	I1213 12:06:41.902420  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.902429  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:41.902435  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:41.902494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:41.929459  620795 cri.go:89] found id: ""
	I1213 12:06:41.929485  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.929494  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:41.929501  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:41.929563  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:41.955676  620795 cri.go:89] found id: ""
	I1213 12:06:41.955700  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.955716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:41.955724  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:41.955783  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:41.981839  620795 cri.go:89] found id: ""
	I1213 12:06:41.981865  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.981875  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:41.981882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:41.981939  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:42.021720  620795 cri.go:89] found id: ""
	I1213 12:06:42.021808  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.021827  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:42.021836  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:42.021908  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:42.052304  620795 cri.go:89] found id: ""
	I1213 12:06:42.052332  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.052341  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:42.052351  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:42.052382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:42.071214  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:42.071250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:42.151103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:42.151127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:42.151146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:42.183473  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:42.183646  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:42.226797  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:42.226834  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:45.037308  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:47.537265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:44.796943  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:44.821281  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:44.821413  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:44.863598  620795 cri.go:89] found id: ""
	I1213 12:06:44.863672  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.863697  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:44.863718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:44.863805  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:44.892309  620795 cri.go:89] found id: ""
	I1213 12:06:44.892395  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.892418  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:44.892438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:44.892552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:44.918444  620795 cri.go:89] found id: ""
	I1213 12:06:44.918522  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.918557  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:44.918581  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:44.918673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:44.944223  620795 cri.go:89] found id: ""
	I1213 12:06:44.944249  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.944258  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:44.944265  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:44.944327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:44.970515  620795 cri.go:89] found id: ""
	I1213 12:06:44.970548  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.970559  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:44.970566  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:44.970626  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:44.996938  620795 cri.go:89] found id: ""
	I1213 12:06:44.996966  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.996976  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:44.996983  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:44.997050  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:45.050971  620795 cri.go:89] found id: ""
	I1213 12:06:45.051001  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.051020  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:45.051028  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:45.051107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:45.095037  620795 cri.go:89] found id: ""
	I1213 12:06:45.095076  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.095087  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:45.095098  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:45.095116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:45.209528  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:45.209618  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:45.240275  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:45.240311  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:45.322872  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:45.322895  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:45.322909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:45.353126  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:45.353162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:47.883672  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:47.894317  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:47.894394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:47.920883  620795 cri.go:89] found id: ""
	I1213 12:06:47.920909  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.920919  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:47.920927  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:47.920985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:47.947168  620795 cri.go:89] found id: ""
	I1213 12:06:47.947197  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.947207  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:47.947214  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:47.947279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:47.972678  620795 cri.go:89] found id: ""
	I1213 12:06:47.972701  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.972710  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:47.972717  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:47.972779  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:48.010849  620795 cri.go:89] found id: ""
	I1213 12:06:48.010915  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.010939  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:48.010961  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:48.011038  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:48.040005  620795 cri.go:89] found id: ""
	I1213 12:06:48.040074  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.040098  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:48.040118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:48.040211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:48.067778  620795 cri.go:89] found id: ""
	I1213 12:06:48.067806  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.067815  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:48.067822  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:48.067884  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:48.096165  620795 cri.go:89] found id: ""
	I1213 12:06:48.096207  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.096218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:48.096224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:48.096297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:48.123725  620795 cri.go:89] found id: ""
	I1213 12:06:48.123761  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.123771  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:48.123781  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:48.123793  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:48.153693  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:48.153733  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:48.185148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:48.185227  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:48.251689  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:48.251724  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:48.269048  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:48.269079  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:48.336435  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
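	[Reading aid, not part of the captured log] Each "failed describe nodes" block above reduces to the same condition: nothing is listening on localhost:8443 inside the node, so kubectl cannot fetch the server API group list and every request ends in "connection refused". A minimal sketch of that reachability check, assuming the same endpoint as in the log, could be:

	```go
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The kubectl errors above are plain TCP "connection refused" on this
		// address; a dial attempt is enough to reproduce the condition.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
	```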
	W1213 12:06:50.037084  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:52.037310  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:50.836744  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:50.848522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:50.848593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:50.874981  620795 cri.go:89] found id: ""
	I1213 12:06:50.875065  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.875088  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:50.875108  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:50.875219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:50.900176  620795 cri.go:89] found id: ""
	I1213 12:06:50.900203  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.900213  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:50.900219  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:50.900277  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:50.929844  620795 cri.go:89] found id: ""
	I1213 12:06:50.929869  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.929878  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:50.929885  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:50.929943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:50.955008  620795 cri.go:89] found id: ""
	I1213 12:06:50.955033  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.955042  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:50.955049  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:50.955104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:50.982109  620795 cri.go:89] found id: ""
	I1213 12:06:50.982134  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.982143  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:50.982149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:50.982211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:51.013066  620795 cri.go:89] found id: ""
	I1213 12:06:51.013144  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.013160  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:51.013168  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:51.013236  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:51.042207  620795 cri.go:89] found id: ""
	I1213 12:06:51.042233  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.042243  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:51.042250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:51.042315  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:51.068089  620795 cri.go:89] found id: ""
	I1213 12:06:51.068116  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.068125  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:51.068135  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:51.068146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:51.136510  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:51.136550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:51.153539  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:51.153567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:51.227168  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:51.227240  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:51.227271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:51.256505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:51.256541  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:53.786599  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:53.808412  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:53.808498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:53.866097  620795 cri.go:89] found id: ""
	I1213 12:06:53.866124  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.866133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:53.866140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:53.866197  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:53.896398  620795 cri.go:89] found id: ""
	I1213 12:06:53.896426  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.896435  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:53.896442  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:53.896499  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:53.922228  620795 cri.go:89] found id: ""
	I1213 12:06:53.922255  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.922265  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:53.922271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:53.922333  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:53.947081  620795 cri.go:89] found id: ""
	I1213 12:06:53.947107  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.947116  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:53.947123  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:53.947177  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:53.972340  620795 cri.go:89] found id: ""
	I1213 12:06:53.972365  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.972374  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:53.972381  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:53.972437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:54.000806  620795 cri.go:89] found id: ""
	I1213 12:06:54.000835  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.000844  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:54.000851  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:54.000925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:54.030584  620795 cri.go:89] found id: ""
	I1213 12:06:54.030617  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.030626  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:54.030648  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:54.030734  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:54.056807  620795 cri.go:89] found id: ""
	I1213 12:06:54.056833  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.056842  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:54.056877  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:54.056897  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:54.122299  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:54.122347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:54.139911  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:54.139944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:54.202433  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:54.202453  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:54.202466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:54.230939  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:54.230977  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:54.536621  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:56.537197  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:56.761244  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:56.773199  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:56.773280  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:56.833295  620795 cri.go:89] found id: ""
	I1213 12:06:56.833323  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.833338  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:56.833345  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:56.833410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:56.877141  620795 cri.go:89] found id: ""
	I1213 12:06:56.877179  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.877189  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:56.877195  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:56.877255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:56.909304  620795 cri.go:89] found id: ""
	I1213 12:06:56.909329  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.909337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:56.909344  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:56.909402  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:56.937175  620795 cri.go:89] found id: ""
	I1213 12:06:56.937206  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.937215  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:56.937222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:56.937283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:56.962816  620795 cri.go:89] found id: ""
	I1213 12:06:56.962839  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.962848  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:56.962854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:56.962909  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:56.988340  620795 cri.go:89] found id: ""
	I1213 12:06:56.988364  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.988372  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:56.988379  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:56.988438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:57.014873  620795 cri.go:89] found id: ""
	I1213 12:06:57.014956  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.014979  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:57.014997  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:57.015107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:57.042222  620795 cri.go:89] found id: ""
	I1213 12:06:57.042295  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.042331  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:57.042357  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:57.042383  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:57.070110  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:57.070148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:57.097788  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:57.097812  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:57.164029  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:57.164067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:57.182586  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:57.182619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:57.253568  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:59.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:01.537092  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:59.753877  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:59.764872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:59.764943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:59.794978  620795 cri.go:89] found id: ""
	I1213 12:06:59.795002  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.795016  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:59.795027  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:59.795086  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:59.832235  620795 cri.go:89] found id: ""
	I1213 12:06:59.832264  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.832276  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:59.832283  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:59.832342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:59.879189  620795 cri.go:89] found id: ""
	I1213 12:06:59.879217  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.879227  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:59.879233  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:59.879296  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:59.906738  620795 cri.go:89] found id: ""
	I1213 12:06:59.906766  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.906775  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:59.906782  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:59.906838  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:59.934746  620795 cri.go:89] found id: ""
	I1213 12:06:59.934774  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.934783  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:59.934790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:59.934852  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:59.962016  620795 cri.go:89] found id: ""
	I1213 12:06:59.962049  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.962059  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:59.962066  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:59.962123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:59.988024  620795 cri.go:89] found id: ""
	I1213 12:06:59.988047  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.988056  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:59.988062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:59.988118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:00.062022  620795 cri.go:89] found id: ""
	I1213 12:07:00.062049  620795 logs.go:282] 0 containers: []
	W1213 12:07:00.062059  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:00.062076  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:00.062094  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:00.179599  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:00.181365  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:00.211914  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:00.211958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:00.303311  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:00.303333  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:00.303347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:00.339996  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:00.340039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:02.882696  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:02.898926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:02.899000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:02.928919  620795 cri.go:89] found id: ""
	I1213 12:07:02.928949  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.928959  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:02.928967  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:02.929030  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:02.955168  620795 cri.go:89] found id: ""
	I1213 12:07:02.955194  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.955209  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:02.955215  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:02.955273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:02.984105  620795 cri.go:89] found id: ""
	I1213 12:07:02.984132  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.984141  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:02.984159  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:02.984220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:03.011185  620795 cri.go:89] found id: ""
	I1213 12:07:03.011210  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.011219  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:03.011227  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:03.011289  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:03.038557  620795 cri.go:89] found id: ""
	I1213 12:07:03.038580  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.038588  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:03.038594  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:03.038656  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:03.064610  620795 cri.go:89] found id: ""
	I1213 12:07:03.064650  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.064661  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:03.064667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:03.064725  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:03.090406  620795 cri.go:89] found id: ""
	I1213 12:07:03.090432  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.090441  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:03.090447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:03.090506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:03.117733  620795 cri.go:89] found id: ""
	I1213 12:07:03.117761  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.117770  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:03.117780  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:03.117792  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:03.185975  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:03.185999  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:03.186011  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:03.214353  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:03.214387  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:03.244844  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:03.244873  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:03.310569  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:03.310608  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:07:04.037144  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:06.537015  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:05.828010  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:05.840499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:05.840570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:05.867194  620795 cri.go:89] found id: ""
	I1213 12:07:05.867272  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.867295  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:05.867314  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:05.867394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:05.894013  620795 cri.go:89] found id: ""
	I1213 12:07:05.894044  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.894054  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:05.894061  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:05.894126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:05.920207  620795 cri.go:89] found id: ""
	I1213 12:07:05.920234  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.920244  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:05.920250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:05.920309  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:05.948255  620795 cri.go:89] found id: ""
	I1213 12:07:05.948280  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.948289  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:05.948295  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:05.948352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:05.975137  620795 cri.go:89] found id: ""
	I1213 12:07:05.975162  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.975211  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:05.975222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:05.975283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:06.006992  620795 cri.go:89] found id: ""
	I1213 12:07:06.007020  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.007030  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:06.007037  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:06.007106  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:06.035032  620795 cri.go:89] found id: ""
	I1213 12:07:06.035067  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.035077  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:06.035084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:06.035157  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:06.066833  620795 cri.go:89] found id: ""
	I1213 12:07:06.066865  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.066875  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:06.066885  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:06.066899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:06.134254  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:06.134284  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:06.134297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:06.163816  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:06.163852  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:06.194055  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:06.194084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:06.262450  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:06.262550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:08.779798  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:08.793568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:08.793654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:08.848358  620795 cri.go:89] found id: ""
	I1213 12:07:08.848399  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.848408  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:08.848415  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:08.848485  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:08.881239  620795 cri.go:89] found id: ""
	I1213 12:07:08.881268  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.881278  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:08.881284  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:08.881358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:08.912007  620795 cri.go:89] found id: ""
	I1213 12:07:08.912038  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.912059  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:08.912070  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:08.912143  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:08.948718  620795 cri.go:89] found id: ""
	I1213 12:07:08.948744  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.948754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:08.948760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:08.948815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:08.974195  620795 cri.go:89] found id: ""
	I1213 12:07:08.974224  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.974234  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:08.974240  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:08.974298  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:09.000368  620795 cri.go:89] found id: ""
	I1213 12:07:09.000409  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.000420  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:09.000428  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:09.000500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:09.027504  620795 cri.go:89] found id: ""
	I1213 12:07:09.027539  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.027548  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:09.027554  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:09.027611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:09.052844  620795 cri.go:89] found id: ""
	I1213 12:07:09.052870  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.052879  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:09.052888  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:09.052899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:09.080443  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:09.080483  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:09.109721  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:09.109747  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:09.174545  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:09.174581  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:09.192943  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:09.192974  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:09.036994  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:11.537211  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:09.256162  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:11.756459  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:11.766714  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:11.766784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:11.797701  620795 cri.go:89] found id: ""
	I1213 12:07:11.797728  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.797737  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:11.797753  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:11.797832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:11.833489  620795 cri.go:89] found id: ""
	I1213 12:07:11.833563  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.833585  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:11.833604  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:11.833692  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:11.869283  620795 cri.go:89] found id: ""
	I1213 12:07:11.869305  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.869314  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:11.869320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:11.869376  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:11.899820  620795 cri.go:89] found id: ""
	I1213 12:07:11.899845  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.899855  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:11.899862  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:11.899925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:11.926125  620795 cri.go:89] found id: ""
	I1213 12:07:11.926150  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.926159  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:11.926166  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:11.926224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:11.952049  620795 cri.go:89] found id: ""
	I1213 12:07:11.952131  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.952165  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:11.952178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:11.952250  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:11.982382  620795 cri.go:89] found id: ""
	I1213 12:07:11.982407  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.982415  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:11.982421  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:11.982494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:12.014887  620795 cri.go:89] found id: ""
	I1213 12:07:12.014912  620795 logs.go:282] 0 containers: []
	W1213 12:07:12.014921  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:12.014931  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:12.014943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:12.080370  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:12.080407  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:12.097493  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:12.097534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:12.163658  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:12.163680  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:12.163692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:12.192505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:12.192544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:14.037223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:16.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:14.721085  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:14.731999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:14.732070  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:14.758997  620795 cri.go:89] found id: ""
	I1213 12:07:14.759023  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.759032  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:14.759039  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:14.759098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:14.831264  620795 cri.go:89] found id: ""
	I1213 12:07:14.831294  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.831303  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:14.831310  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:14.831366  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:14.882934  620795 cri.go:89] found id: ""
	I1213 12:07:14.882964  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.882973  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:14.882980  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:14.883040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:14.916858  620795 cri.go:89] found id: ""
	I1213 12:07:14.916888  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.916898  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:14.916905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:14.916969  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:14.942297  620795 cri.go:89] found id: ""
	I1213 12:07:14.942334  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.942343  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:14.942355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:14.942431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:14.967905  620795 cri.go:89] found id: ""
	I1213 12:07:14.967927  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.967936  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:14.967942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:14.968000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:14.993041  620795 cri.go:89] found id: ""
	I1213 12:07:14.993107  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.993131  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:14.993145  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:14.993224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:15.027730  620795 cri.go:89] found id: ""
	I1213 12:07:15.027755  620795 logs.go:282] 0 containers: []
	W1213 12:07:15.027765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:15.027776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:15.027789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:15.095470  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:15.095507  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:15.113485  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:15.113567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:15.183456  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:15.183481  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:15.183497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:15.212670  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:15.212706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:17.745028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:17.755868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:17.755965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:17.830528  620795 cri.go:89] found id: ""
	I1213 12:07:17.830551  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.830559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:17.830585  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:17.830654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:17.866003  620795 cri.go:89] found id: ""
	I1213 12:07:17.866029  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.866038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:17.866044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:17.866102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:17.891564  620795 cri.go:89] found id: ""
	I1213 12:07:17.891588  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.891597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:17.891603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:17.891664  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:17.918740  620795 cri.go:89] found id: ""
	I1213 12:07:17.918768  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.918776  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:17.918783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:17.918845  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:17.950736  620795 cri.go:89] found id: ""
	I1213 12:07:17.950774  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.950784  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:17.950790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:17.950854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:17.976775  620795 cri.go:89] found id: ""
	I1213 12:07:17.976799  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.976809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:17.976816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:17.976883  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:18.008430  620795 cri.go:89] found id: ""
	I1213 12:07:18.008460  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.008469  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:18.008477  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:18.008564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:18.037446  620795 cri.go:89] found id: ""
	I1213 12:07:18.037477  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.037488  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:18.037502  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:18.037517  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:18.068414  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:18.068443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:18.138588  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:18.138627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:18.155698  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:18.155729  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:18.222792  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:18.222835  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:18.222847  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:19.037064  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:21.536199  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:20.751476  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:20.762121  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:20.762190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:20.818771  620795 cri.go:89] found id: ""
	I1213 12:07:20.818794  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.818803  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:20.818810  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:20.818877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:20.873533  620795 cri.go:89] found id: ""
	I1213 12:07:20.873556  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.873564  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:20.873581  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:20.873639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:20.900689  620795 cri.go:89] found id: ""
	I1213 12:07:20.900716  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.900725  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:20.900732  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:20.900790  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:20.926298  620795 cri.go:89] found id: ""
	I1213 12:07:20.926324  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.926334  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:20.926340  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:20.926400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:20.955692  620795 cri.go:89] found id: ""
	I1213 12:07:20.955767  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.955789  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:20.955808  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:20.955904  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:20.981101  620795 cri.go:89] found id: ""
	I1213 12:07:20.981126  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.981135  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:20.981146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:20.981208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:21.012906  620795 cri.go:89] found id: ""
	I1213 12:07:21.012933  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.012942  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:21.012949  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:21.013024  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:21.043717  620795 cri.go:89] found id: ""
	I1213 12:07:21.043743  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.043753  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:21.043764  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:21.043776  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:21.116319  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:21.116368  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:21.133173  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:21.133204  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:21.201103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:21.201127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:21.201140  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:21.229422  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:21.229457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:23.763349  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:23.781088  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:23.781159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:23.857623  620795 cri.go:89] found id: ""
	I1213 12:07:23.857648  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.857666  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:23.857673  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:23.857736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:23.882807  620795 cri.go:89] found id: ""
	I1213 12:07:23.882833  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.882842  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:23.882849  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:23.882907  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:23.908402  620795 cri.go:89] found id: ""
	I1213 12:07:23.908430  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.908440  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:23.908447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:23.908506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:23.933800  620795 cri.go:89] found id: ""
	I1213 12:07:23.933826  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.933835  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:23.933841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:23.933919  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:23.959222  620795 cri.go:89] found id: ""
	I1213 12:07:23.959248  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.959259  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:23.959266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:23.959352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:23.985470  620795 cri.go:89] found id: ""
	I1213 12:07:23.985496  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.985505  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:23.985512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:23.985570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:24.014442  620795 cri.go:89] found id: ""
	I1213 12:07:24.014477  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.014487  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:24.014494  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:24.014556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:24.043282  620795 cri.go:89] found id: ""
	I1213 12:07:24.043308  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.043318  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:24.043328  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:24.043340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:24.075046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:24.075073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:24.143658  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:24.143701  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:24.160736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:24.160765  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:24.224652  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:24.224675  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:24.224692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:23.536309  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:25.537129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:28.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:26.754848  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:26.765356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:26.765429  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:26.818982  620795 cri.go:89] found id: ""
	I1213 12:07:26.819005  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.819013  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:26.819020  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:26.819078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:26.871231  620795 cri.go:89] found id: ""
	I1213 12:07:26.871253  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.871262  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:26.871268  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:26.871326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:26.898363  620795 cri.go:89] found id: ""
	I1213 12:07:26.898443  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.898467  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:26.898486  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:26.898578  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:26.923840  620795 cri.go:89] found id: ""
	I1213 12:07:26.923866  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.923875  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:26.923882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:26.923940  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:26.952921  620795 cri.go:89] found id: ""
	I1213 12:07:26.952950  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.952960  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:26.952967  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:26.953028  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:26.984162  620795 cri.go:89] found id: ""
	I1213 12:07:26.984188  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.984197  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:26.984203  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:26.984282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:27.022329  620795 cri.go:89] found id: ""
	I1213 12:07:27.022397  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.022413  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:27.022420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:27.022479  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:27.048366  620795 cri.go:89] found id: ""
	I1213 12:07:27.048391  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.048401  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:27.048410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:27.048423  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:27.076996  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:27.077029  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:27.149458  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:27.149509  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:27.167444  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:27.167473  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:27.235232  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:27.235258  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:27.235270  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:30.537006  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:33.036221  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:29.764538  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:29.791446  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:29.791560  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:29.844876  620795 cri.go:89] found id: ""
	I1213 12:07:29.844953  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.844976  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:29.844996  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:29.845082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:29.884357  620795 cri.go:89] found id: ""
	I1213 12:07:29.884423  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.884441  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:29.884449  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:29.884508  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:29.914712  620795 cri.go:89] found id: ""
	I1213 12:07:29.914738  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.914748  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:29.914755  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:29.914813  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:29.940420  620795 cri.go:89] found id: ""
	I1213 12:07:29.940500  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.940516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:29.940524  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:29.940585  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:29.970378  620795 cri.go:89] found id: ""
	I1213 12:07:29.970404  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.970413  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:29.970420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:29.970478  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:29.996803  620795 cri.go:89] found id: ""
	I1213 12:07:29.996881  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.996898  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:29.996907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:29.996983  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:30.040874  620795 cri.go:89] found id: ""
	I1213 12:07:30.040904  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.040913  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:30.040920  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:30.040995  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:30.083632  620795 cri.go:89] found id: ""
	I1213 12:07:30.083658  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.083667  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:30.083676  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:30.083689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:30.149516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:30.149553  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:30.167731  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:30.167816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:30.233503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:30.233567  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:30.233586  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:30.263464  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:30.263497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:32.796303  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:32.813180  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:32.813263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:32.849335  620795 cri.go:89] found id: ""
	I1213 12:07:32.849413  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.849456  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:32.849481  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:32.849570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:32.880068  620795 cri.go:89] found id: ""
	I1213 12:07:32.880092  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.880101  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:32.880107  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:32.880165  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:32.907166  620795 cri.go:89] found id: ""
	I1213 12:07:32.907193  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.907202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:32.907209  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:32.907266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:32.933296  620795 cri.go:89] found id: ""
	I1213 12:07:32.933366  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.933388  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:32.933407  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:32.933500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:32.959040  620795 cri.go:89] found id: ""
	I1213 12:07:32.959106  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.959130  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:32.959149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:32.959233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:32.989508  620795 cri.go:89] found id: ""
	I1213 12:07:32.989531  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.989540  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:32.989546  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:32.989629  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:33.018978  620795 cri.go:89] found id: ""
	I1213 12:07:33.019002  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.019010  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:33.019017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:33.019098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:33.046327  620795 cri.go:89] found id: ""
	I1213 12:07:33.046359  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.046368  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:33.046378  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:33.046419  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:33.075176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:33.075213  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:33.107277  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:33.107309  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:33.174349  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:33.174384  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:33.192737  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:33.192770  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:33.259992  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:07:35.037005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:37.037071  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:35.760267  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:35.771899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:35.771965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:35.816451  620795 cri.go:89] found id: ""
	I1213 12:07:35.816499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.816508  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:35.816519  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:35.816576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:35.874010  620795 cri.go:89] found id: ""
	I1213 12:07:35.874031  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.874040  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:35.874046  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:35.874109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:35.901470  620795 cri.go:89] found id: ""
	I1213 12:07:35.901499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.901509  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:35.901515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:35.901577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:35.929967  620795 cri.go:89] found id: ""
	I1213 12:07:35.929988  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.929997  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:35.930004  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:35.930061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:35.959220  620795 cri.go:89] found id: ""
	I1213 12:07:35.959245  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.959255  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:35.959262  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:35.959323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:35.988889  620795 cri.go:89] found id: ""
	I1213 12:07:35.988916  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.988925  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:35.988932  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:35.988990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:36.017868  620795 cri.go:89] found id: ""
	I1213 12:07:36.017896  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.017906  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:36.017912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:36.017975  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:36.046482  620795 cri.go:89] found id: ""
	I1213 12:07:36.046508  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.046517  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:36.046527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:36.046539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:36.063480  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:36.063675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:36.134374  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:36.134437  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:36.134465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:36.164786  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:36.164831  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:36.195048  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:36.195077  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:38.762384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:38.773774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:38.773860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:38.823096  620795 cri.go:89] found id: ""
	I1213 12:07:38.823118  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.823127  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:38.823133  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:38.823192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:38.859735  620795 cri.go:89] found id: ""
	I1213 12:07:38.859758  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.859766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:38.859773  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:38.859832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:38.888780  620795 cri.go:89] found id: ""
	I1213 12:07:38.888806  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.888815  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:38.888821  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:38.888885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:38.918480  620795 cri.go:89] found id: ""
	I1213 12:07:38.918506  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.918516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:38.918522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:38.918579  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:38.944442  620795 cri.go:89] found id: ""
	I1213 12:07:38.944475  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.944485  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:38.944492  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:38.944548  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:38.972111  620795 cri.go:89] found id: ""
	I1213 12:07:38.972138  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.972148  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:38.972156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:38.972217  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:38.999220  620795 cri.go:89] found id: ""
	I1213 12:07:38.999249  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.999259  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:38.999266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:38.999387  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:39.027462  620795 cri.go:89] found id: ""
	I1213 12:07:39.027489  620795 logs.go:282] 0 containers: []
	W1213 12:07:39.027498  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:39.027508  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:39.027551  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:39.045387  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:39.045421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:39.113555  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:39.113577  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:39.113591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:39.141868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:39.141905  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:39.170660  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:39.170687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:07:39.536473  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:41.536533  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:41.738914  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:41.749712  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:41.749788  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:41.815733  620795 cri.go:89] found id: ""
	I1213 12:07:41.815757  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.815767  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:41.815774  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:41.815837  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:41.853772  620795 cri.go:89] found id: ""
	I1213 12:07:41.853794  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.853802  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:41.853808  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:41.853864  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:41.880989  620795 cri.go:89] found id: ""
	I1213 12:07:41.881012  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.881021  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:41.881027  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:41.881085  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:41.910432  620795 cri.go:89] found id: ""
	I1213 12:07:41.910455  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.910464  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:41.910470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:41.910525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:41.938539  620795 cri.go:89] found id: ""
	I1213 12:07:41.938561  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.938570  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:41.938576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:41.938636  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:41.964574  620795 cri.go:89] found id: ""
	I1213 12:07:41.964608  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.964617  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:41.964624  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:41.964681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:41.989355  620795 cri.go:89] found id: ""
	I1213 12:07:41.989380  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.989389  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:41.989396  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:41.989456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:42.019802  620795 cri.go:89] found id: ""
	I1213 12:07:42.019830  620795 logs.go:282] 0 containers: []
	W1213 12:07:42.019839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:42.019849  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:42.019861  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:42.052058  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:42.052087  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:42.123300  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:42.123360  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:42.144729  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:42.144768  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:42.227868  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:42.227896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:42.227910  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:44.037002  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:46.037183  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:44.760193  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:44.770916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:44.770989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:44.803100  620795 cri.go:89] found id: ""
	I1213 12:07:44.803124  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.803133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:44.803140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:44.803195  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:44.851212  620795 cri.go:89] found id: ""
	I1213 12:07:44.851235  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.851244  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:44.851250  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:44.851307  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:44.902052  620795 cri.go:89] found id: ""
	I1213 12:07:44.902075  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.902084  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:44.902090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:44.902150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:44.933898  620795 cri.go:89] found id: ""
	I1213 12:07:44.933926  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.933935  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:44.933942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:44.934026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:44.963132  620795 cri.go:89] found id: ""
	I1213 12:07:44.963158  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.963167  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:44.963174  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:44.963261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:44.988132  620795 cri.go:89] found id: ""
	I1213 12:07:44.988163  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.988174  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:44.988181  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:44.988238  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:45.046906  620795 cri.go:89] found id: ""
	I1213 12:07:45.046934  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.046943  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:45.046951  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:45.047019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:45.080632  620795 cri.go:89] found id: ""
	I1213 12:07:45.080730  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.080752  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:45.080792  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:45.080810  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:45.157685  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:45.157797  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:45.212507  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:45.212574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:45.292666  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:45.292707  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:45.292720  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:45.321658  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:45.321690  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:47.858977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:47.870353  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:47.870425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:47.902849  620795 cri.go:89] found id: ""
	I1213 12:07:47.902874  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.902883  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:47.902890  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:47.902958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:47.928841  620795 cri.go:89] found id: ""
	I1213 12:07:47.928866  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.928875  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:47.928882  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:47.928943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:47.954469  620795 cri.go:89] found id: ""
	I1213 12:07:47.954494  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.954503  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:47.954510  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:47.954571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:47.984225  620795 cri.go:89] found id: ""
	I1213 12:07:47.984248  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.984257  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:47.984263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:47.984327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:48.013666  620795 cri.go:89] found id: ""
	I1213 12:07:48.013694  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.013704  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:48.013710  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:48.013776  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:48.043313  620795 cri.go:89] found id: ""
	I1213 12:07:48.043341  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.043351  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:48.043358  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:48.043445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:48.070641  620795 cri.go:89] found id: ""
	I1213 12:07:48.070669  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.070680  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:48.070687  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:48.070767  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:48.096729  620795 cri.go:89] found id: ""
	I1213 12:07:48.096754  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.096764  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:48.096773  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:48.096785  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:48.129289  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:48.129318  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:48.196743  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:48.196781  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:48.213775  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:48.213802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:48.282000  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:48.282076  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:48.282104  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:48.537001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:50.537083  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:53.037078  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:50.813946  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:50.834838  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:50.834928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:50.871307  620795 cri.go:89] found id: ""
	I1213 12:07:50.871329  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.871337  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:50.871343  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:50.871400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:50.900887  620795 cri.go:89] found id: ""
	I1213 12:07:50.900913  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.900922  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:50.900929  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:50.900987  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:50.926497  620795 cri.go:89] found id: ""
	I1213 12:07:50.926569  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.926606  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:50.926631  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:50.926721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:50.954230  620795 cri.go:89] found id: ""
	I1213 12:07:50.954256  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.954266  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:50.954273  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:50.954331  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:50.980389  620795 cri.go:89] found id: ""
	I1213 12:07:50.980414  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.980425  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:50.980431  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:50.980490  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:51.007396  620795 cri.go:89] found id: ""
	I1213 12:07:51.007423  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.007433  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:51.007444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:51.007507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:51.038515  620795 cri.go:89] found id: ""
	I1213 12:07:51.038540  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.038550  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:51.038556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:51.038611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:51.066063  620795 cri.go:89] found id: ""
	I1213 12:07:51.066088  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.066096  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:51.066111  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:51.066122  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:51.131363  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:51.131402  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:51.148223  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:51.148253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:51.211768  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:51.211791  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:51.211807  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:51.239792  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:51.239825  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:53.772909  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:53.794190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:53.794255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:53.863195  620795 cri.go:89] found id: ""
	I1213 12:07:53.863228  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.863239  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:53.863246  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:53.863323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:53.894744  620795 cri.go:89] found id: ""
	I1213 12:07:53.894812  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.894836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:53.894855  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:53.894941  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:53.922176  620795 cri.go:89] found id: ""
	I1213 12:07:53.922244  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.922266  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:53.922284  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:53.922371  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:53.948409  620795 cri.go:89] found id: ""
	I1213 12:07:53.948437  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.948446  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:53.948453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:53.948512  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:53.974142  620795 cri.go:89] found id: ""
	I1213 12:07:53.974222  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.974244  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:53.974263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:53.974369  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:54.002307  620795 cri.go:89] found id: ""
	I1213 12:07:54.002343  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.002353  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:54.002361  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:54.002440  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:54.030334  620795 cri.go:89] found id: ""
	I1213 12:07:54.030413  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.030438  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:54.030457  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:54.030566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:54.056614  620795 cri.go:89] found id: ""
	I1213 12:07:54.056697  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.056713  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:54.056724  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:54.056737  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:54.124215  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:54.124253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:54.141024  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:54.141052  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:54.203423  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:54.203445  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:54.203457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:54.231323  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:54.231355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:55.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:57.537019  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:56.762827  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:56.786084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:56.786208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:56.855486  620795 cri.go:89] found id: ""
	I1213 12:07:56.855531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.855542  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:56.855549  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:56.855615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:56.883436  620795 cri.go:89] found id: ""
	I1213 12:07:56.883531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.883557  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:56.883587  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:56.883648  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:56.908626  620795 cri.go:89] found id: ""
	I1213 12:07:56.908708  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.908739  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:56.908752  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:56.908821  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:56.935174  620795 cri.go:89] found id: ""
	I1213 12:07:56.935201  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.935210  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:56.935217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:56.935302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:56.964101  620795 cri.go:89] found id: ""
	I1213 12:07:56.964128  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.964139  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:56.964146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:56.964232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:56.989991  620795 cri.go:89] found id: ""
	I1213 12:07:56.990016  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.990025  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:56.990032  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:56.990117  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:57.021908  620795 cri.go:89] found id: ""
	I1213 12:07:57.021934  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.021944  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:57.021952  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:57.022015  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:57.050893  620795 cri.go:89] found id: ""
	I1213 12:07:57.050919  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.050929  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:57.050939  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:57.050958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:57.114649  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:57.114709  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:57.114743  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:57.142743  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:57.142778  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:57.171088  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:57.171120  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:57.236905  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:57.236948  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:00.039297  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:02.536522  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:59.754255  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:59.764877  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:59.764948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:59.800655  620795 cri.go:89] found id: ""
	I1213 12:07:59.800682  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.800691  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:59.800698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:59.800757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:59.844261  620795 cri.go:89] found id: ""
	I1213 12:07:59.844289  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.844299  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:59.844305  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:59.844363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:59.890278  620795 cri.go:89] found id: ""
	I1213 12:07:59.890303  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.890313  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:59.890319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:59.890379  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:59.918606  620795 cri.go:89] found id: ""
	I1213 12:07:59.918632  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.918641  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:59.918647  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:59.918703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:59.947895  620795 cri.go:89] found id: ""
	I1213 12:07:59.947918  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.947928  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:59.947934  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:59.947993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:59.973045  620795 cri.go:89] found id: ""
	I1213 12:07:59.973073  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.973082  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:59.973089  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:59.973163  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:00.009231  620795 cri.go:89] found id: ""
	I1213 12:08:00.009320  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.009353  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:00.009374  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:00.009507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:00.119476  620795 cri.go:89] found id: ""
	I1213 12:08:00.119618  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.119644  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:00.119687  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:00.119721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:00.145226  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:00.145450  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:00.282893  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:00.282923  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:00.282944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:00.371336  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:00.371439  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:00.430461  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:00.430503  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.002113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:03.014603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:03.014679  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:03.042673  620795 cri.go:89] found id: ""
	I1213 12:08:03.042701  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.042711  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:03.042718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:03.042778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:03.074056  620795 cri.go:89] found id: ""
	I1213 12:08:03.074133  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.074164  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:03.074185  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:03.074301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:03.101450  620795 cri.go:89] found id: ""
	I1213 12:08:03.101485  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.101495  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:03.101502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:03.101564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:03.132013  620795 cri.go:89] found id: ""
	I1213 12:08:03.132042  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.132053  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:03.132060  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:03.132123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:03.158035  620795 cri.go:89] found id: ""
	I1213 12:08:03.158057  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.158067  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:03.158074  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:03.158131  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:03.183772  620795 cri.go:89] found id: ""
	I1213 12:08:03.183800  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.183809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:03.183816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:03.183879  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:03.209685  620795 cri.go:89] found id: ""
	I1213 12:08:03.209710  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.209718  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:03.209725  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:03.209809  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:03.238718  620795 cri.go:89] found id: ""
	I1213 12:08:03.238742  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.238751  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:03.238760  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:03.238771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:03.266176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:03.266211  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:03.295327  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:03.295357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.371751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:03.371796  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:03.388535  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:03.388569  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:03.455075  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
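The block above is one full pass of minikube's log-gathering loop while the API server is down: every per-component probe returns nothing, so each check ends in a "No container was found matching ..." warning. A minimal Go sketch of that probe, written only from the commands visible in this log (it is not minikube's own source, and it assumes crictl and sudo are available on the node being inspected):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Runs the same per-component check seen in the log above:
//   sudo crictl ps -a --quiet --name=<component>
// An empty result corresponds to the 'No container was found matching "<component>"' warnings.
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.TrimSpace(string(out))
		if ids == "" {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("found id: %s\n", ids)
	}
}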
	W1213 12:08:05.037001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:07.037153  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:05.956468  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:05.967247  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:05.967349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:05.992470  620795 cri.go:89] found id: ""
	I1213 12:08:05.992495  620795 logs.go:282] 0 containers: []
	W1213 12:08:05.992504  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:05.992510  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:05.992576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:06.025309  620795 cri.go:89] found id: ""
	I1213 12:08:06.025339  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.025349  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:06.025356  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:06.025417  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:06.056164  620795 cri.go:89] found id: ""
	I1213 12:08:06.056192  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.056202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:06.056208  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:06.056268  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:06.091020  620795 cri.go:89] found id: ""
	I1213 12:08:06.091047  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.091057  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:06.091063  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:06.091124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:06.117741  620795 cri.go:89] found id: ""
	I1213 12:08:06.117767  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.117776  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:06.117792  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:06.117850  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:06.143430  620795 cri.go:89] found id: ""
	I1213 12:08:06.143454  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.143465  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:06.143472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:06.143558  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:06.169857  620795 cri.go:89] found id: ""
	I1213 12:08:06.169883  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.169892  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:06.169899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:06.169959  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:06.196298  620795 cri.go:89] found id: ""
	I1213 12:08:06.196325  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.196335  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:06.196344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:06.196385  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:06.212572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:06.212599  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:06.278450  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:06.278473  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:06.278485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:06.306640  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:06.306679  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:06.336266  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:06.336295  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
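Each cycle ends with the same "Gathering logs for ..." steps, run on the node through /bin/bash -c. The sketch below replays those commands in the order they appear in the cycle above; it is an illustration rather than minikube's implementation, and with no apiserver listening only the "describe nodes" step is expected to fail.

package main

import (
	"fmt"
	"os/exec"
)

// Replays the gather commands exactly as they appear in the log above.
func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		if err := exec.Command("/bin/bash", "-c", s.cmd).Run(); err != nil {
			// Expected for "describe nodes" while localhost:8443 refuses connections.
			fmt.Printf("  %s failed: %v\n", s.name, err)
		}
	}
}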
	I1213 12:08:08.901791  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:08.912829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:08.912897  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:08.942435  620795 cri.go:89] found id: ""
	I1213 12:08:08.942467  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.942476  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:08.942483  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:08.942552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:08.968397  620795 cri.go:89] found id: ""
	I1213 12:08:08.968475  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.968508  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:08.968533  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:08.968615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:08.995667  620795 cri.go:89] found id: ""
	I1213 12:08:08.995734  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.995757  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:08.995776  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:08.995851  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:09.026748  620795 cri.go:89] found id: ""
	I1213 12:08:09.026827  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.026859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:09.026878  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:09.026961  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:09.052881  620795 cri.go:89] found id: ""
	I1213 12:08:09.052910  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.052919  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:09.052926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:09.053016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:09.079635  620795 cri.go:89] found id: ""
	I1213 12:08:09.079663  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.079673  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:09.079679  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:09.079740  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:09.106465  620795 cri.go:89] found id: ""
	I1213 12:08:09.106499  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.106507  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:09.106529  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:09.106610  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:09.132296  620795 cri.go:89] found id: ""
	I1213 12:08:09.132373  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.132389  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:09.132400  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:09.132411  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:09.198891  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:09.198937  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:09.215689  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:09.215718  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:09.536381  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:11.536495  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:09.283376  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:09.283399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:09.283412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:09.311953  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:09.311995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:11.844673  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:11.854957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:11.855031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:11.884334  620795 cri.go:89] found id: ""
	I1213 12:08:11.884361  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.884370  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:11.884377  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:11.884438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:11.911693  620795 cri.go:89] found id: ""
	I1213 12:08:11.911715  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.911724  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:11.911730  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:11.911785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:11.939653  620795 cri.go:89] found id: ""
	I1213 12:08:11.939679  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.939688  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:11.939694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:11.939753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:11.965596  620795 cri.go:89] found id: ""
	I1213 12:08:11.965622  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.965631  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:11.965639  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:11.965695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:11.994822  620795 cri.go:89] found id: ""
	I1213 12:08:11.994848  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.994857  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:11.994863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:11.994921  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:12.027085  620795 cri.go:89] found id: ""
	I1213 12:08:12.027111  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.027119  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:12.027127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:12.027189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:12.060592  620795 cri.go:89] found id: ""
	I1213 12:08:12.060621  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.060631  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:12.060637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:12.060695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:12.087001  620795 cri.go:89] found id: ""
	I1213 12:08:12.087026  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.087035  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:12.087046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:12.087057  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:12.154968  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:12.155007  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:12.173266  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:12.173296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:12.238320  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:12.238342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:12.238353  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:12.266852  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:12.266886  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:08:14.037082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:16.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
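The two node_ready warnings above and the recurring kubectl failures describe the same condition from two different clients: with no kube-apiserver container running, nothing listens on port 8443, so every TCP dial is refused immediately instead of timing out. A small Go check of that condition, using the two endpoints that appear in this log (an illustration only, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dials the apiserver endpoints seen in the log. While the apiserver is down,
// both dials should fail fast with "connection refused".
func main() {
	for _, addr := range []string{"localhost:8443", "192.168.85.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}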
	I1213 12:08:14.799502  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:14.811316  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:14.811495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:14.868310  620795 cri.go:89] found id: ""
	I1213 12:08:14.868404  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.868430  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:14.868485  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:14.868662  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:14.910677  620795 cri.go:89] found id: ""
	I1213 12:08:14.910744  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.910766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:14.910785  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:14.910872  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:14.939727  620795 cri.go:89] found id: ""
	I1213 12:08:14.939767  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.939777  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:14.939783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:14.939849  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:14.966035  620795 cri.go:89] found id: ""
	I1213 12:08:14.966069  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.966078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:14.966086  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:14.966160  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:14.994530  620795 cri.go:89] found id: ""
	I1213 12:08:14.994596  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.994619  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:14.994641  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:14.994727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:15.032176  620795 cri.go:89] found id: ""
	I1213 12:08:15.032213  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.032223  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:15.032230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:15.032294  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:15.063866  620795 cri.go:89] found id: ""
	I1213 12:08:15.063900  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.063910  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:15.063916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:15.063977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:15.094824  620795 cri.go:89] found id: ""
	I1213 12:08:15.094857  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.094867  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:15.094876  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:15.094888  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:15.123857  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:15.123926  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:15.189408  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:15.189444  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:15.208112  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:15.208143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:15.272770  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:15.272794  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:15.272806  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:17.802242  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:17.818907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:17.818976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:17.860553  620795 cri.go:89] found id: ""
	I1213 12:08:17.860577  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.860586  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:17.860594  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:17.860663  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:17.890844  620795 cri.go:89] found id: ""
	I1213 12:08:17.890868  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.890877  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:17.890883  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:17.890937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:17.916758  620795 cri.go:89] found id: ""
	I1213 12:08:17.916784  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.916794  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:17.916800  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:17.916860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:17.946527  620795 cri.go:89] found id: ""
	I1213 12:08:17.946564  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.946573  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:17.946598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:17.946684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:17.971981  620795 cri.go:89] found id: ""
	I1213 12:08:17.972004  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.972013  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:17.972020  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:17.972075  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:17.997005  620795 cri.go:89] found id: ""
	I1213 12:08:17.997042  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.997052  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:17.997059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:17.997126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:18.029007  620795 cri.go:89] found id: ""
	I1213 12:08:18.029038  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.029054  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:18.029061  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:18.029120  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:18.056596  620795 cri.go:89] found id: ""
	I1213 12:08:18.056625  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.056637  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:18.056647  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:18.056661  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:18.074846  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:18.074874  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:18.144092  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:18.144157  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:18.144176  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:18.173096  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:18.173134  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:18.208914  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:18.208943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:19.037143  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:21.537005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:20.774528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:20.788572  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:20.788639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:20.858764  620795 cri.go:89] found id: ""
	I1213 12:08:20.858786  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.858794  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:20.858800  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:20.858857  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:20.887866  620795 cri.go:89] found id: ""
	I1213 12:08:20.887888  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.887897  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:20.887904  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:20.887967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:20.918367  620795 cri.go:89] found id: ""
	I1213 12:08:20.918438  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.918462  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:20.918481  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:20.918566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:20.943267  620795 cri.go:89] found id: ""
	I1213 12:08:20.943292  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.943301  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:20.943308  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:20.943362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:20.972672  620795 cri.go:89] found id: ""
	I1213 12:08:20.972707  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.972716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:20.972723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:20.972781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:20.997368  620795 cri.go:89] found id: ""
	I1213 12:08:20.997394  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.997404  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:20.997411  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:20.997487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:21.029283  620795 cri.go:89] found id: ""
	I1213 12:08:21.029309  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.029319  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:21.029328  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:21.029382  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:21.054485  620795 cri.go:89] found id: ""
	I1213 12:08:21.054510  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.054520  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:21.054529  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:21.054540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:21.121036  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:21.121073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:21.137498  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:21.137526  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:21.201021  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:21.201047  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:21.201060  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:21.233120  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:21.233155  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:23.768528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:23.784788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:23.784875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:23.861902  620795 cri.go:89] found id: ""
	I1213 12:08:23.861933  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.861949  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:23.861956  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:23.862019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:23.890007  620795 cri.go:89] found id: ""
	I1213 12:08:23.890029  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.890038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:23.890044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:23.890104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:23.915427  620795 cri.go:89] found id: ""
	I1213 12:08:23.915450  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.915459  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:23.915465  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:23.915550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:23.941041  620795 cri.go:89] found id: ""
	I1213 12:08:23.941069  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.941078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:23.941085  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:23.941141  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:23.966860  620795 cri.go:89] found id: ""
	I1213 12:08:23.966886  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.966895  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:23.966902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:23.966958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:23.992499  620795 cri.go:89] found id: ""
	I1213 12:08:23.992528  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.992537  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:23.992558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:23.992616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:24.019996  620795 cri.go:89] found id: ""
	I1213 12:08:24.020030  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.020045  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:24.020052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:24.020129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:24.047181  620795 cri.go:89] found id: ""
	I1213 12:08:24.047216  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.047225  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:24.047234  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:24.047245  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:24.110372  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:24.110398  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:24.110412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:24.139714  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:24.139748  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:24.172397  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:24.172426  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:24.037139  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:26.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:24.240938  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:24.240975  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:26.757922  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:26.771140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:26.771256  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:26.808049  620795 cri.go:89] found id: ""
	I1213 12:08:26.808124  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.808149  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:26.808169  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:26.808258  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:26.845750  620795 cri.go:89] found id: ""
	I1213 12:08:26.845826  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.845851  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:26.845870  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:26.845951  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:26.885327  620795 cri.go:89] found id: ""
	I1213 12:08:26.885401  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.885424  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:26.885444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:26.885533  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:26.912813  620795 cri.go:89] found id: ""
	I1213 12:08:26.912844  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.912853  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:26.912860  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:26.912917  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:26.940224  620795 cri.go:89] found id: ""
	I1213 12:08:26.940301  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.940317  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:26.940325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:26.940383  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:26.970684  620795 cri.go:89] found id: ""
	I1213 12:08:26.970728  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.970738  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:26.970745  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:26.970825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:27.001739  620795 cri.go:89] found id: ""
	I1213 12:08:27.001821  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.001846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:27.001867  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:27.001968  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:27.029502  620795 cri.go:89] found id: ""
	I1213 12:08:27.029525  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.029533  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:27.029542  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:27.029561  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:27.097411  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:27.097433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:27.097445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:27.126207  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:27.126242  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:27.152776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:27.152814  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:27.218430  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:27.218466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:29.036447  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:31.536317  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:29.735087  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:29.746276  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:29.746353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:29.790488  620795 cri.go:89] found id: ""
	I1213 12:08:29.790563  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.790587  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:29.790607  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:29.790694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:29.863661  620795 cri.go:89] found id: ""
	I1213 12:08:29.863730  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.863747  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:29.863754  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:29.863822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:29.889696  620795 cri.go:89] found id: ""
	I1213 12:08:29.889723  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.889731  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:29.889738  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:29.889793  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:29.917557  620795 cri.go:89] found id: ""
	I1213 12:08:29.917619  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.917642  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:29.917657  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:29.917732  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:29.941179  620795 cri.go:89] found id: ""
	I1213 12:08:29.941201  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.941210  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:29.941217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:29.941276  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:29.965683  620795 cri.go:89] found id: ""
	I1213 12:08:29.965758  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.965775  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:29.965783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:29.965858  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:29.994076  620795 cri.go:89] found id: ""
	I1213 12:08:29.994111  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.994121  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:29.994127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:29.994189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:30.034696  620795 cri.go:89] found id: ""
	I1213 12:08:30.034723  620795 logs.go:282] 0 containers: []
	W1213 12:08:30.034733  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:30.034743  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:30.034756  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:30.103277  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:30.103319  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:30.120811  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:30.120901  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:30.194375  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:30.194399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:30.194412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:30.225794  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:30.225830  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:32.757391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:32.768065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:32.768178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:32.801083  620795 cri.go:89] found id: ""
	I1213 12:08:32.801105  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.801114  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:32.801123  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:32.801179  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:32.839546  620795 cri.go:89] found id: ""
	I1213 12:08:32.839567  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.839576  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:32.839582  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:32.839637  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:32.888939  620795 cri.go:89] found id: ""
	I1213 12:08:32.889005  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.889029  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:32.889044  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:32.889115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:32.926624  620795 cri.go:89] found id: ""
	I1213 12:08:32.926651  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.926666  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:32.926676  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:32.926752  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:32.958800  620795 cri.go:89] found id: ""
	I1213 12:08:32.958835  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.958844  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:32.958850  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:32.958916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:32.989617  620795 cri.go:89] found id: ""
	I1213 12:08:32.989692  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.989708  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:32.989721  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:32.989791  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:33.017551  620795 cri.go:89] found id: ""
	I1213 12:08:33.017623  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.017647  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:33.017659  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:33.017736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:33.043587  620795 cri.go:89] found id: ""
	I1213 12:08:33.043612  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.043621  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:33.043632  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:33.043644  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:33.114830  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:33.114904  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:33.114923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:33.144060  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:33.144098  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:33.174527  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:33.174559  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:33.242589  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:33.242622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:33.536995  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:35.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:38.037111  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:35.760100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:35.770376  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:35.770444  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:35.803335  620795 cri.go:89] found id: ""
	I1213 12:08:35.803356  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.803365  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:35.803371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:35.803427  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:35.837892  620795 cri.go:89] found id: ""
	I1213 12:08:35.837916  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.837926  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:35.837933  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:35.837989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:35.866561  620795 cri.go:89] found id: ""
	I1213 12:08:35.866588  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.866598  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:35.866605  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:35.866667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:35.892759  620795 cri.go:89] found id: ""
	I1213 12:08:35.892795  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.892804  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:35.892810  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:35.892880  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:35.923215  620795 cri.go:89] found id: ""
	I1213 12:08:35.923238  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.923247  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:35.923252  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:35.923310  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:35.950448  620795 cri.go:89] found id: ""
	I1213 12:08:35.950475  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.950484  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:35.950491  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:35.950546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:35.976121  620795 cri.go:89] found id: ""
	I1213 12:08:35.976149  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.976158  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:35.976165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:35.976247  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:36.007726  620795 cri.go:89] found id: ""
	I1213 12:08:36.007754  620795 logs.go:282] 0 containers: []
	W1213 12:08:36.007765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:36.007774  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:36.007789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:36.085423  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:36.085465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:36.104590  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:36.104621  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:36.174734  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:36.174757  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:36.174771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:36.204232  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:36.204271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:38.733384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:38.744052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:38.744118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:38.780661  620795 cri.go:89] found id: ""
	I1213 12:08:38.780685  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.780694  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:38.780704  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:38.780764  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:38.822383  620795 cri.go:89] found id: ""
	I1213 12:08:38.822407  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.822416  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:38.822422  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:38.822477  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:38.855498  620795 cri.go:89] found id: ""
	I1213 12:08:38.855544  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.855553  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:38.855565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:38.855619  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:38.885018  620795 cri.go:89] found id: ""
	I1213 12:08:38.885045  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.885055  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:38.885062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:38.885119  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:38.910126  620795 cri.go:89] found id: ""
	I1213 12:08:38.910162  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.910172  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:38.910179  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:38.910246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:38.940467  620795 cri.go:89] found id: ""
	I1213 12:08:38.940502  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.940513  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:38.940520  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:38.940597  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:38.966188  620795 cri.go:89] found id: ""
	I1213 12:08:38.966222  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.966232  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:38.966238  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:38.966303  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:38.995881  620795 cri.go:89] found id: ""
	I1213 12:08:38.995907  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.995917  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:38.995927  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:38.995939  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:39.015887  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:39.015917  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:39.098130  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:39.098150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:39.098163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:39.126236  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:39.126269  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:39.153815  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:39.153842  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:40.037886  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:42.536996  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:41.721729  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:41.732158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:41.732229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:41.760995  620795 cri.go:89] found id: ""
	I1213 12:08:41.761017  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.761026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:41.761033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:41.761087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:41.795082  620795 cri.go:89] found id: ""
	I1213 12:08:41.795105  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.795113  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:41.795119  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:41.795184  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:41.825959  620795 cri.go:89] found id: ""
	I1213 12:08:41.826033  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.826056  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:41.826076  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:41.826159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:41.852118  620795 cri.go:89] found id: ""
	I1213 12:08:41.852183  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.852198  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:41.852205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:41.852261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:41.877587  620795 cri.go:89] found id: ""
	I1213 12:08:41.877626  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.877636  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:41.877642  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:41.877706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:41.906166  620795 cri.go:89] found id: ""
	I1213 12:08:41.906192  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.906202  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:41.906216  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:41.906273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:41.935663  620795 cri.go:89] found id: ""
	I1213 12:08:41.935688  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.935697  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:41.935704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:41.935761  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:41.960919  620795 cri.go:89] found id: ""
	I1213 12:08:41.960943  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.960952  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:41.960960  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:41.960971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:41.989438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:41.989472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:42.026694  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:42.026779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:42.120242  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:42.120297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:42.141212  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:42.141246  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:42.216949  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:44.537110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:47.036204  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:44.717236  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:44.728891  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:44.728977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:44.753976  620795 cri.go:89] found id: ""
	I1213 12:08:44.754000  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.754008  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:44.754018  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:44.754078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:44.786705  620795 cri.go:89] found id: ""
	I1213 12:08:44.786732  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.786741  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:44.786748  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:44.786806  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:44.822299  620795 cri.go:89] found id: ""
	I1213 12:08:44.822328  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.822337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:44.822345  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:44.822401  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:44.856823  620795 cri.go:89] found id: ""
	I1213 12:08:44.856856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.856867  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:44.856873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:44.856930  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:44.882589  620795 cri.go:89] found id: ""
	I1213 12:08:44.882614  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.882623  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:44.882630  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:44.882688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:44.908466  620795 cri.go:89] found id: ""
	I1213 12:08:44.908491  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.908500  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:44.908507  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:44.908588  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:44.937829  620795 cri.go:89] found id: ""
	I1213 12:08:44.937856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.937865  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:44.937872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:44.937927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:44.963281  620795 cri.go:89] found id: ""
	I1213 12:08:44.963305  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.963315  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:44.963324  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:44.963335  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:44.991410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:44.991446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:45.037106  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:45.037139  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:45.136316  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:45.136362  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:45.159600  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:45.159635  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:45.275736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:47.775978  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:47.794424  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:47.794535  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:47.822730  620795 cri.go:89] found id: ""
	I1213 12:08:47.822773  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.822782  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:47.822794  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:47.822874  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:47.855882  620795 cri.go:89] found id: ""
	I1213 12:08:47.855909  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.855921  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:47.855928  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:47.855992  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:47.880824  620795 cri.go:89] found id: ""
	I1213 12:08:47.880849  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.880863  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:47.880870  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:47.880944  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:47.905536  620795 cri.go:89] found id: ""
	I1213 12:08:47.905558  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.905567  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:47.905573  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:47.905627  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:47.930629  620795 cri.go:89] found id: ""
	I1213 12:08:47.930651  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.930660  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:47.930666  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:47.930722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:47.963310  620795 cri.go:89] found id: ""
	I1213 12:08:47.963340  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.963348  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:47.963355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:47.963416  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:47.988259  620795 cri.go:89] found id: ""
	I1213 12:08:47.988284  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.988293  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:47.988300  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:47.988363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:48.016297  620795 cri.go:89] found id: ""
	I1213 12:08:48.016324  620795 logs.go:282] 0 containers: []
	W1213 12:08:48.016334  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:48.016344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:48.016358  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:48.036992  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:48.037157  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:48.110165  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:48.110186  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:48.110199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:48.138855  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:48.138892  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:48.167128  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:48.167162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:49.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:52.036223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:50.735817  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:50.746548  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:50.746616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:50.775549  620795 cri.go:89] found id: ""
	I1213 12:08:50.775575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.775585  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:50.775591  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:50.775646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:50.804612  620795 cri.go:89] found id: ""
	I1213 12:08:50.804635  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.804644  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:50.804650  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:50.804705  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:50.837625  620795 cri.go:89] found id: ""
	I1213 12:08:50.837650  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.837659  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:50.837665  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:50.837720  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:50.864589  620795 cri.go:89] found id: ""
	I1213 12:08:50.864612  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.864620  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:50.864627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:50.864687  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:50.889551  620795 cri.go:89] found id: ""
	I1213 12:08:50.889575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.889583  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:50.889589  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:50.889646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:50.919224  620795 cri.go:89] found id: ""
	I1213 12:08:50.919247  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.919255  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:50.919261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:50.919317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:50.944422  620795 cri.go:89] found id: ""
	I1213 12:08:50.944495  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.944574  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:50.944612  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:50.944696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:50.970021  620795 cri.go:89] found id: ""
	I1213 12:08:50.970086  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.970109  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:50.970132  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:50.970163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:50.986872  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:50.986906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:51.060506  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:51.060540  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:51.060552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:51.092480  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:51.092521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:51.123102  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:51.123131  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:53.694152  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:53.705704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:53.705773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:53.731245  620795 cri.go:89] found id: ""
	I1213 12:08:53.731268  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.731276  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:53.731282  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:53.731340  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:53.757925  620795 cri.go:89] found id: ""
	I1213 12:08:53.757957  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.757966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:53.757973  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:53.758036  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:53.808536  620795 cri.go:89] found id: ""
	I1213 12:08:53.808559  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.808568  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:53.808575  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:53.808635  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:53.840078  620795 cri.go:89] found id: ""
	I1213 12:08:53.840112  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.840122  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:53.840129  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:53.840189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:53.865894  620795 cri.go:89] found id: ""
	I1213 12:08:53.865917  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.865927  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:53.865933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:53.865993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:53.891498  620795 cri.go:89] found id: ""
	I1213 12:08:53.891542  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.891551  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:53.891558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:53.891621  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:53.917936  620795 cri.go:89] found id: ""
	I1213 12:08:53.917959  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.917968  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:53.917974  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:53.918032  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:53.943098  620795 cri.go:89] found id: ""
	I1213 12:08:53.943169  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.943193  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:53.943215  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:53.943252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:53.971597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:53.971637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:54.002508  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:54.002540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:54.080813  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:54.080899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:54.109629  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:54.109659  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:54.177694  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:54.036977  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:56.537074  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:56.677966  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:56.688667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:56.688741  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:56.713668  620795 cri.go:89] found id: ""
	I1213 12:08:56.713690  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.713699  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:56.713706  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:56.713762  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:56.741202  620795 cri.go:89] found id: ""
	I1213 12:08:56.741227  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.741236  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:56.741242  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:56.741339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:56.768922  620795 cri.go:89] found id: ""
	I1213 12:08:56.768942  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.768950  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:56.768957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:56.769013  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:56.797125  620795 cri.go:89] found id: ""
	I1213 12:08:56.797148  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.797157  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:56.797164  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:56.797218  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:56.824672  620795 cri.go:89] found id: ""
	I1213 12:08:56.824695  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.824703  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:56.824709  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:56.824763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:56.849420  620795 cri.go:89] found id: ""
	I1213 12:08:56.849446  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.849455  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:56.849462  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:56.849516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:56.875118  620795 cri.go:89] found id: ""
	I1213 12:08:56.875143  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.875152  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:56.875158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:56.875213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:56.900386  620795 cri.go:89] found id: ""
	I1213 12:08:56.900411  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.900420  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:56.900434  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:56.900446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:56.966130  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:56.966167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:56.982745  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:56.982773  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:57.073125  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:57.073146  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:57.073165  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:57.104552  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:57.104585  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:59.636110  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:59.649509  620795 out.go:203] 
	W1213 12:08:59.652376  620795 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 12:08:59.652409  620795 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 12:08:59.652418  620795 out.go:285] * Related issues:
	W1213 12:08:59.652431  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 12:08:59.652444  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 12:08:59.655226  620795 out.go:203] 
	W1213 12:08:59.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:01.536950  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:03.536998  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:06.036283  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:08.536173  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:10.536219  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:09:11.536756  622913 node_ready.go:38] duration metric: took 6m0.001029523s for node "no-preload-307409" to be "Ready" ...
	I1213 12:09:11.540138  622913 out.go:203] 
	W1213 12:09:11.543197  622913 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 12:09:11.543231  622913 out.go:285] * 
	W1213 12:09:11.545584  622913 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:09:11.548648  622913 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522775611Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522787123Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522793498Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522799176Z" level=info msg="RDT not available in the host system"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522824374Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523715533Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523753753Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523772756Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.524439847Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.524461181Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.52461968Z" level=info msg="Updated default CNI network name to "
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.525403671Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.529256513Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.529355665Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576580003Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576617025Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576659233Z" level=info msg="Create NRI interface"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576753674Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576762569Z" level=info msg="runtime interface created"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576773646Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576779767Z" level=info msg="runtime interface starting up..."
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576785831Z" level=info msg="starting plugins..."
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576798393Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576863575Z" level=info msg="No systemd watchdog enabled"
	Dec 13 12:03:09 no-preload-307409 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:09:15.289807    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.290518    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.292082    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.292753    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.294278    4055 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:09:15 up  3:51,  0 user,  load average: 1.08, 0.86, 1.24
	Linux no-preload-307409 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:09:12 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:13 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 13 12:09:13 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:13 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:13 no-preload-307409 kubelet[3953]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:13 no-preload-307409 kubelet[3953]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:13 no-preload-307409 kubelet[3953]: E1213 12:09:13.381652    3953 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:13 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:13 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:14 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 13 12:09:14 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:14 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:14 no-preload-307409 kubelet[3959]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:14 no-preload-307409 kubelet[3959]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:14 no-preload-307409 kubelet[3959]: E1213 12:09:14.121161    3959 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:14 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:14 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:14 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 13 12:09:14 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:14 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:14 no-preload-307409 kubelet[3971]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:14 no-preload-307409 kubelet[3971]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:14 no-preload-307409 kubelet[3971]: E1213 12:09:14.851528    3971 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:14 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:14 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
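
The kubelet section at the end of the dump above points at the likely root cause for this profile: every restart (counter 484-486) exits with "kubelet is configured to not run on a host using cgroup v1", so no static pods and therefore no kube-apiserver are ever created. As a manual, host-level sanity check (not part of the test suite), the cgroup mode of the Docker host can be confirmed with:

	# prints cgroup2fs on a cgroup v2 host, tmpfs on a cgroup v1 host
	stat -fc %T /sys/fs/cgroup
	# what Docker reports for the containers it creates ("1" or "2")
	docker info --format '{{.CgroupVersion}}'
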
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 2 (600.867397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (375.34s)
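
The SecondStart run above keeps repeating the same probes until it gives up with K8S_APISERVER_MISSING. Those checks can be replayed by hand to confirm the apiserver never appeared; the commands below are the ones visible in the ssh_runner lines, run through minikube ssh for this run's profile (substitute your own profile name for no-preload-307409):

	# open a shell on the node for this profile
	out/minikube-linux-arm64 ssh -p no-preload-307409
	# inside the node: is an apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# does CRI-O know of a kube-apiserver container, running or exited?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# kubelet log tail, where the cgroup v1 validation error is reported
	sudo journalctl -u kubelet -n 400
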

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (13.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-800979 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (316.979038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-800979 -n newest-cni-800979
E1213 12:09:06.640424  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (325.366872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-800979 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (343.569172ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-800979 -n newest-cni-800979
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (305.186315ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
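Both the pause and the unpause leave the apiserver and kubelet reported as "Stopped", so the failure is not specific to one transition. A minimal follow-up, assuming the newest-cni-800979 profile container is still up (hypothetical, not executed in this run), would be to query the node directly:

	out/minikube-linux-arm64 ssh -p newest-cni-800979 "sudo systemctl is-active kubelet"
	out/minikube-linux-arm64 ssh -p newest-cni-800979 "sudo crictl ps -a"    # lists control-plane containers and their current state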
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-800979
helpers_test.go:244: (dbg) docker inspect newest-cni-800979:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	        "Created": "2025-12-13T11:52:51.619651061Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T12:02:49.509239436Z",
	            "FinishedAt": "2025-12-13T12:02:48.165379431Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hosts",
	        "LogPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef-json.log",
	        "Name": "/newest-cni-800979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-800979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-800979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	                "LowerDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-800979",
	                "Source": "/var/lib/docker/volumes/newest-cni-800979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-800979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-800979",
	                "name.minikube.sigs.k8s.io": "newest-cni-800979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24ac9a215b72ee124284f478ff764304afc09b82226a2739c7b5f0f9a84a05cd",
	            "SandboxKey": "/var/run/docker/netns/24ac9a215b72",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-800979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:2e:cf:d5:d1:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de59fc08c8081b0c37df8bacf82db2ccccb307596588e9c22d7d094938935e3c",
	                    "EndpointID": "4aeedc678fe23c218965caf6e08605f8464cbaa26208ec7a8c460ea48b3e8143",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-800979",
	                        "4aef671a766b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (369.621766ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25: (1.711591304s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ stop    │ -p newest-cni-800979 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-800979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │                     │
	│ stop    │ -p no-preload-307409 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ addons  │ enable dashboard -p no-preload-307409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │                     │
	│ image   │ newest-cni-800979 image list --format=json                                                                                                                                                                                                           │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	│ pause   │ -p newest-cni-800979 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	│ unpause │ -p newest-cni-800979 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:03:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:03:03.050063  622913 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:03:03.050285  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050312  622913 out.go:374] Setting ErrFile to fd 2...
	I1213 12:03:03.050330  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050625  622913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:03:03.051085  622913 out.go:368] Setting JSON to false
	I1213 12:03:03.052120  622913 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13535,"bootTime":1765613848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:03:03.052229  622913 start.go:143] virtualization:  
	I1213 12:03:03.055383  622913 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:03:03.059239  622913 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:03:03.059332  622913 notify.go:221] Checking for updates...
	I1213 12:03:03.064728  622913 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:03:03.067859  622913 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:03.070706  622913 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:03:03.073576  622913 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:03:03.076392  622913 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:03:03.079655  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:03.080246  622913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:03:03.113231  622913 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:03:03.113356  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.174414  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.164880125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.174536  622913 docker.go:319] overlay module found
	I1213 12:03:03.177638  622913 out.go:179] * Using the docker driver based on existing profile
	I1213 12:03:03.180320  622913 start.go:309] selected driver: docker
	I1213 12:03:03.180343  622913 start.go:927] validating driver "docker" against &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.180449  622913 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:03:03.181174  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.236517  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.227319129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.236860  622913 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:03:03.236895  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:03.236967  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:03.237012  622913 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.241932  622913 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 12:03:03.244777  622913 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:03:03.247722  622913 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:03:03.250567  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:03.250698  622913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:03:03.250725  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.251056  622913 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251142  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 12:03:03.251161  622913 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.655µs
	I1213 12:03:03.251175  622913 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 12:03:03.251192  622913 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251240  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 12:03:03.251249  622913 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 59.291µs
	I1213 12:03:03.251256  622913 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251279  622913 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251318  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 12:03:03.251329  622913 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.749µs
	I1213 12:03:03.251341  622913 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251360  622913 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251395  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 12:03:03.251406  622913 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.043µs
	I1213 12:03:03.251413  622913 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251422  622913 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251455  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 12:03:03.251465  622913 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 43.717µs
	I1213 12:03:03.251472  622913 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251481  622913 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251564  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 12:03:03.251578  622913 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 97.437µs
	I1213 12:03:03.251585  622913 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 12:03:03.251596  622913 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251632  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 12:03:03.251642  622913 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.82µs
	I1213 12:03:03.251649  622913 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 12:03:03.251673  622913 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251707  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 12:03:03.251716  622913 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.632µs
	I1213 12:03:03.251723  622913 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 12:03:03.251729  622913 cache.go:87] Successfully saved all images to host disk.
	I1213 12:03:03.282338  622913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:03:03.282369  622913 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:03:03.282443  622913 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:03:03.282477  622913 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.282544  622913 start.go:364] duration metric: took 41.937µs to acquireMachinesLock for "no-preload-307409"
	I1213 12:03:03.282565  622913 start.go:96] Skipping create...Using existing machine configuration
	I1213 12:03:03.282570  622913 fix.go:54] fixHost starting: 
	I1213 12:03:03.282851  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.304419  622913 fix.go:112] recreateIfNeeded on no-preload-307409: state=Stopped err=<nil>
	W1213 12:03:03.304448  622913 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 12:02:59.273796  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.310724  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:02:59.374429  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.374460  620795 retry.go:31] will retry after 1.123869523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.660188  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:02:59.746796  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.746834  620795 retry.go:31] will retry after 827.424249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.773951  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.886643  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:59.984018  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.984054  620795 retry.go:31] will retry after 1.031600228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.289311  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:00.498512  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:00.574703  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:00.609412  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.609443  620795 retry.go:31] will retry after 1.594897337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:00.654022  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.654055  620795 retry.go:31] will retry after 1.847551508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.773391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.016343  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:01.149191  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.149241  620795 retry.go:31] will retry after 1.156400239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.273296  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.773106  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:02.204552  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:02.273738  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:02.274099  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.274136  620795 retry.go:31] will retry after 1.092655081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.305854  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:02.368964  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.369001  620795 retry.go:31] will retry after 1.680740365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.502311  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:02.587589  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.587627  620795 retry.go:31] will retry after 1.930642019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.773890  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.281133  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.367295  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:03.462797  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.462834  620795 retry.go:31] will retry after 1.480584037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.773095  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.050289  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:04.211663  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.211692  620795 retry.go:31] will retry after 4.628682765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.307872  622913 out.go:252] * Restarting existing docker container for "no-preload-307409" ...
	I1213 12:03:03.307964  622913 cli_runner.go:164] Run: docker start no-preload-307409
	I1213 12:03:03.599368  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.618935  622913 kic.go:430] container "no-preload-307409" state is running.
	I1213 12:03:03.619319  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:03.641333  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
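The cli_runner lines above drive the Docker CLI directly, using --format with a Go template to pull out a single field such as the container state or its network addresses. The same query can be reproduced outside minikube by shelling out with os/exec; the sketch below uses the container name from this log and is illustrative, not minikube's cli_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to `docker container inspect` the same way the
// cli_runner lines above do, returning e.g. "running" or "exited".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("no-preload-307409")
	fmt.Println(state, err)
}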
	I1213 12:03:03.641563  622913 machine.go:94] provisionDockerMachine start ...
	I1213 12:03:03.641633  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:03.663338  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:03.663870  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:03.663890  622913 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:03:03.664580  622913 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:03:06.819092  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.819117  622913 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 12:03:06.819201  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:06.837856  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:06.838181  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:06.838198  622913 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 12:03:06.997122  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.997203  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.016669  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.017014  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.017037  622913 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:03:07.176125  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
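The shell fragment just executed is idempotent: it only rewrites /etc/hosts when the 127.0.1.1 entry does not already carry the new hostname. A sketch of how such a command could be assembled with the hostname as the only parameter is shown below; the helper name and formatting are assumptions, and this is not the exact template minikube uses.

package main

import "fmt"

// hostsFixCommand builds an idempotent shell snippet that maps 127.0.1.1 to
// the given hostname, in the spirit of the fragment logged above.
func hostsFixCommand(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixCommand("no-preload-307409"))
}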
	I1213 12:03:07.176151  622913 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:03:07.176182  622913 ubuntu.go:190] setting up certificates
	I1213 12:03:07.176201  622913 provision.go:84] configureAuth start
	I1213 12:03:07.176265  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:07.193873  622913 provision.go:143] copyHostCerts
	I1213 12:03:07.193961  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:03:07.193973  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:03:07.194049  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:03:07.194164  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:03:07.194175  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:03:07.194205  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:03:07.194267  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:03:07.194275  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:03:07.194298  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:03:07.194346  622913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
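provision.go:117 generates a server certificate whose subject-alternative names cover every address the node answers on (loopback, the container IP 192.168.85.2, and the host names). For illustration only, a certificate with the same SAN set can be produced with crypto/x509 as sketched below; it is self-signed for brevity, whereas the real flow signs with the cluster CA at the paths listed above, and the organization string is an assumption taken from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Simplified, self-signed sketch with the SANs shown in the log above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-307409"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-307409"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}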
	I1213 12:03:07.397856  622913 provision.go:177] copyRemoteCerts
	I1213 12:03:07.397930  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:03:07.397969  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.415003  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:07.523762  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 12:03:07.541934  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:03:07.560353  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 12:03:07.577524  622913 provision.go:87] duration metric: took 401.305633ms to configureAuth
	I1213 12:03:07.577567  622913 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:03:07.577753  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:07.577860  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.595178  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.595492  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.595506  622913 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:03:07.957883  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:03:07.957909  622913 machine.go:97] duration metric: took 4.316335928s to provisionDockerMachine
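Each "About to run SSH command" step above is a single command executed over the forwarded SSH port (127.0.0.1:33473) with the machine's id_rsa key and the docker user. A bare-bones equivalent using golang.org/x/crypto/ssh is sketched below under those assumptions; it is not libmachine itself, and the key path is copied from this log purely for illustration.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, port, and user taken from the log above; adjust for your own profile.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33473", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err)
}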
	I1213 12:03:07.957921  622913 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 12:03:07.957933  622913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:03:07.958002  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:03:07.958068  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.976949  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:04.273235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.518978  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:04.583937  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.583972  620795 retry.go:31] will retry after 4.359648713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.773380  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.944170  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:05.011259  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.011298  620795 retry.go:31] will retry after 2.730254551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.273717  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:05.773164  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.274023  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.773331  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.742621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:07.773999  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:07.885064  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:07.885095  620795 retry.go:31] will retry after 5.399825259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.773645  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.841141  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:08.935930  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.935967  620795 retry.go:31] will retry after 8.567303782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.944298  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:09.032112  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:09.032154  620795 retry.go:31] will retry after 7.715566724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
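Every failure in this block has the same root cause: kubectl cannot reach the API server on localhost:8443 to download the OpenAPI schema it uses for client-side validation. The error text suggests --validate=false as an escape hatch; a minimal sketch of that invocation is below (paths and version taken from the log above; note it only skips schema validation, so while the apiserver is still refusing connections the apply would fail anyway, which is why minikube retries instead):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml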
	I1213 12:03:08.088342  622913 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:03:08.091929  622913 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:03:08.092010  622913 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:03:08.092029  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:03:08.092100  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:03:08.092225  622913 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:03:08.092336  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:03:08.100328  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:08.119806  622913 start.go:296] duration metric: took 161.868607ms for postStartSetup
	I1213 12:03:08.119893  622913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:03:08.119935  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.137272  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.240715  622913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:03:08.245595  622913 fix.go:56] duration metric: took 4.963017027s for fixHost
	I1213 12:03:08.245624  622913 start.go:83] releasing machines lock for "no-preload-307409", held for 4.963070517s
	I1213 12:03:08.245713  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:08.262782  622913 ssh_runner.go:195] Run: cat /version.json
	I1213 12:03:08.262844  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.263126  622913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:03:08.263189  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.283140  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.296409  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.391353  622913 ssh_runner.go:195] Run: systemctl --version
	I1213 12:03:08.484408  622913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:03:08.531460  622913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:03:08.537034  622913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:03:08.537102  622913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:03:08.548165  622913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 12:03:08.548229  622913 start.go:496] detecting cgroup driver to use...
	I1213 12:03:08.548280  622913 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:03:08.548375  622913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:03:08.564936  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:03:08.579568  622913 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:03:08.579670  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:03:08.596861  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:03:08.610443  622913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:03:08.718052  622913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:03:08.841997  622913 docker.go:234] disabling docker service ...
	I1213 12:03:08.842083  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:03:08.857246  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:03:08.871656  622913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:03:09.021847  622913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:03:09.148277  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:03:09.162720  622913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:03:09.178582  622913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:03:09.178712  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.188481  622913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:03:09.188600  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.198182  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.207488  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.217314  622913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:03:09.225728  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.234602  622913 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.243163  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.251840  622913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:03:09.261376  622913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:03:09.269241  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.408118  622913 ssh_runner.go:195] Run: sudo systemctl restart crio
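The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and the ip_unprivileged_port_start sysctl) before cri-o is restarted. A quick, illustrative way to confirm the drop-in ended up with the intended values (not part of the test run):

    # Sanity-check the cri-o drop-in after the restart.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio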
	I1213 12:03:09.582010  622913 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:03:09.582116  622913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:03:09.586129  622913 start.go:564] Will wait 60s for crictl version
	I1213 12:03:09.586218  622913 ssh_runner.go:195] Run: which crictl
	I1213 12:03:09.589880  622913 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:03:09.617198  622913 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:03:09.617307  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.648039  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.680132  622913 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 12:03:09.683104  622913 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:03:09.699119  622913 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 12:03:09.703132  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.712888  622913 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:03:09.713027  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:09.713074  622913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:03:09.749883  622913 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:03:09.749906  622913 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:03:09.749914  622913 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 12:03:09.750028  622913 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
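This rendered unit drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To see what systemd actually merges for the kubelet (illustrative commands, not part of the run):

    # Print the effective unit including all drop-ins, then reload and restart.
    systemctl cat kubelet
    sudo systemctl daemon-reload && sudo systemctl restart kubelet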
	I1213 12:03:09.750104  622913 ssh_runner.go:195] Run: crio config
	I1213 12:03:09.812957  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:09.812981  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:09.813006  622913 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:03:09.813030  622913 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:03:09.813160  622913 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:03:09.813240  622913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 12:03:09.821482  622913 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:03:09.821552  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:03:09.830108  622913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 12:03:09.842772  622913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 12:03:09.855539  622913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
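The kubeadm config rendered above is shipped to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the copy already on the node. Recent kubeadm releases also ship a config validator; as an illustrative check (assuming the bundled kubeadm binary is present under the binaries directory shown above):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new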
	I1213 12:03:09.868438  622913 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:03:09.871940  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.881527  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.994807  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:10.018299  622913 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 12:03:10.018324  622913 certs.go:195] generating shared ca certs ...
	I1213 12:03:10.018341  622913 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.018485  622913 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:03:10.018546  622913 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:03:10.018560  622913 certs.go:257] generating profile certs ...
	I1213 12:03:10.018675  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 12:03:10.018739  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 12:03:10.018788  622913 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 12:03:10.018902  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:03:10.018945  622913 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:03:10.018958  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:03:10.018984  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:03:10.019011  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:03:10.019049  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:03:10.019107  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:10.019800  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:03:10.070011  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:03:10.106991  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:03:10.124508  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:03:10.141854  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 12:03:10.159596  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 12:03:10.177143  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:03:10.193680  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 12:03:10.212540  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:03:10.230850  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:03:10.247982  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:03:10.265265  622913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:03:10.280828  622913 ssh_runner.go:195] Run: openssl version
	I1213 12:03:10.287915  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.295295  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:03:10.302777  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306712  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306788  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.347657  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:03:10.355488  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.362741  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:03:10.370213  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.373963  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.374024  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.415846  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:03:10.423114  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.430238  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:03:10.437700  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441526  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441626  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.482660  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
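The certificates above are wired into the system trust store using OpenSSL's subject-hash convention: openssl x509 -hash prints the hash (3ec20f2e for 3563282.pem here), and a /etc/ssl/certs/<hash>.0 symlink makes the certificate discoverable. The same steps for one certificate, sketched:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem)
    sudo ln -fs /usr/share/ca-certificates/3563282.pem "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "trust store link present"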
	I1213 12:03:10.490193  622913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:03:10.493922  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 12:03:10.537559  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 12:03:10.580339  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 12:03:10.624474  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 12:03:10.668005  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 12:03:10.719243  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
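Each -checkend 86400 call asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid for at least that long, non-zero means it is about to expire (or already has), which is the signal to regenerate it. For example:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server cert valid for at least another 24h"
    else
      echo "etcd server cert expires within 24h"
    fi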
	I1213 12:03:10.787031  622913 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:10.787127  622913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:03:10.787194  622913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:03:10.866441  622913 cri.go:89] found id: ""
	I1213 12:03:10.866517  622913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:03:10.878947  622913 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 12:03:10.878971  622913 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 12:03:10.879029  622913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 12:03:10.887787  622913 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 12:03:10.888361  622913 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.888611  622913 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-307409" cluster setting kubeconfig missing "no-preload-307409" context setting]
	I1213 12:03:10.889058  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.890426  622913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 12:03:10.898823  622913 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 12:03:10.898859  622913 kubeadm.go:602] duration metric: took 19.881679ms to restartPrimaryControlPlane
	I1213 12:03:10.898869  622913 kubeadm.go:403] duration metric: took 111.848044ms to StartCluster
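The "does not require reconfiguration" decision appears to follow directly from the diff -u a few lines up: when the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on the node, the existing control plane is reused instead of being re-initialised. The shape of that check, sketched:

    # diff exits 0 when the files are identical, non-zero when they differ.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "no reconfiguration needed"
    else
      echo "config drift detected"
    fi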
	I1213 12:03:10.898903  622913 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.899000  622913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.900707  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.900965  622913 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:03:10.901208  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:10.901250  622913 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:03:10.901316  622913 addons.go:70] Setting storage-provisioner=true in profile "no-preload-307409"
	I1213 12:03:10.901329  622913 addons.go:239] Setting addon storage-provisioner=true in "no-preload-307409"
	I1213 12:03:10.901354  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.901796  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.902330  622913 addons.go:70] Setting dashboard=true in profile "no-preload-307409"
	I1213 12:03:10.902349  622913 addons.go:239] Setting addon dashboard=true in "no-preload-307409"
	W1213 12:03:10.902356  622913 addons.go:248] addon dashboard should already be in state true
	I1213 12:03:10.902383  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.902788  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.906749  622913 addons.go:70] Setting default-storageclass=true in profile "no-preload-307409"
	I1213 12:03:10.907002  622913 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-307409"
	I1213 12:03:10.907925  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.908085  622913 out.go:179] * Verifying Kubernetes components...
	I1213 12:03:10.911613  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:10.936135  622913 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:03:10.936200  622913 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 12:03:10.939926  622913 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 12:03:10.940040  622913 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:10.940057  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:03:10.940121  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.942800  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 12:03:10.942825  622913 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 12:03:10.942890  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.947265  622913 addons.go:239] Setting addon default-storageclass=true in "no-preload-307409"
	I1213 12:03:10.947306  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.947819  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:11.005750  622913 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.005772  622913 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:03:11.005782  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.005838  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:11.023641  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.041145  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.111003  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:11.173593  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.173636  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 12:03:11.173654  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 12:03:11.188163  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 12:03:11.188185  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 12:03:11.213443  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 12:03:11.213508  622913 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 12:03:11.227236  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.230811  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 12:03:11.230883  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 12:03:11.251133  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 12:03:11.251205  622913 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 12:03:11.292200  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 12:03:11.292226  622913 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 12:03:11.305259  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 12:03:11.305283  622913 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 12:03:11.318210  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 12:03:11.318236  622913 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 12:03:11.331855  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:11.331882  622913 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 12:03:11.346399  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.535442  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.535581  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535629  622913 retry.go:31] will retry after 290.823808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535633  622913 retry.go:31] will retry after 252.781045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535694  622913 node_ready.go:35] waiting up to 6m0s for node "no-preload-307409" to be "Ready" ...
	W1213 12:03:11.536032  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.536057  622913 retry.go:31] will retry after 294.061208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.788663  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.827131  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.830443  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.858572  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.858608  622913 retry.go:31] will retry after 534.111043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.903268  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.903302  622913 retry.go:31] will retry after 517.641227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.928403  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.928440  622913 retry.go:31] will retry after 261.246628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.190196  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:12.253861  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.253905  622913 retry.go:31] will retry after 750.097801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.392854  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:12.421390  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:12.466046  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.466119  622913 retry.go:31] will retry after 345.117349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:12.494512  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.494543  622913 retry.go:31] will retry after 582.433152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.811477  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:12.872208  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.872254  622913 retry.go:31] will retry after 1.066115266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.004542  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:09.273871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:09.773704  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.273974  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.773144  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.273093  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.773168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.273119  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.773938  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.274064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
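Annotation (not part of the captured log): the pgrep polling by process 620795 above is minikube waiting for a kube-apiserver process to reappear before retrying its own addon applies. A minimal sketch of an equivalent readiness wait, assuming the apiserver address 192.168.85.2:8443 seen later in this log, could poll the TCP port instead of the process table:

	// Sketch only: wait until the apiserver port accepts connections before
	// retrying addon applies. Address and cadence are assumptions from the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForAPIServer(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			// Roughly the 500ms polling cadence of the pgrep loop above.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForAPIServer("192.168.85.2:8443", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}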
	I1213 12:03:13.285062  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.346306  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.346338  620795 retry.go:31] will retry after 9.878335415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.773923  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.077848  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.142906  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.142942  622913 retry.go:31] will retry after 477.26404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.177073  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.177107  622913 retry.go:31] will retry after 558.594273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.536929  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:13.621309  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:13.684925  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.684962  622913 retry.go:31] will retry after 887.0827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.735891  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.838454  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.838488  622913 retry.go:31] will retry after 1.840863262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.938866  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.997740  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.997780  622913 retry.go:31] will retry after 1.50758238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.572279  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:14.649792  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.649830  622913 retry.go:31] will retry after 2.273525411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.505555  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:15.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:15.566161  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.566200  622913 retry.go:31] will retry after 1.268984334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.680410  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:15.739773  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.739804  622913 retry.go:31] will retry after 2.516127735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.835378  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:16.919361  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.919396  622913 retry.go:31] will retry after 2.060639493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.923603  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:16.987685  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.987717  622913 retry.go:31] will retry after 3.014723999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:18.037172  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:14.273845  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:14.773934  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.774017  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.273243  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.748013  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:16.773600  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:16.899498  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.899555  620795 retry.go:31] will retry after 7.173965376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.273146  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:17.504219  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:17.614341  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.614369  620795 retry.go:31] will retry after 8.805046452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.773767  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.273931  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.773442  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.256769  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:18.385179  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.385215  622913 retry.go:31] will retry after 1.545787463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.980290  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:19.083283  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.083326  622913 retry.go:31] will retry after 3.363160165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.931900  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:19.994541  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.994572  622913 retry.go:31] will retry after 3.448577935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.003109  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:20.075345  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.075383  622913 retry.go:31] will retry after 2.247696448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:20.536209  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:22.323733  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:22.390042  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.390078  622913 retry.go:31] will retry after 4.701837343s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.447431  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:22.510069  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.510101  622913 retry.go:31] will retry after 8.996063036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:22.536655  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:19.273647  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:19.773235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.273783  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.774109  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.273100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.774041  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.273187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.773919  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:23.224947  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:23.273354  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:23.287102  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.287132  620795 retry.go:31] will retry after 17.975754277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.774029  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.073794  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:24.135298  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.135337  620795 retry.go:31] will retry after 17.719019377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.443398  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:23.501606  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.501640  622913 retry.go:31] will retry after 3.90534406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:24.537114  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:27.036285  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:27.092481  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:27.162031  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.162065  622913 retry.go:31] will retry after 11.355394108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.407221  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:27.478522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.478557  622913 retry.go:31] will retry after 8.009668822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.273481  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.773666  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.773170  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.273652  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.420263  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:26.478183  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.478224  620795 retry.go:31] will retry after 20.903659468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.773685  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.273297  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.773524  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:29.537044  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:31.506350  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:31.537137  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:31.567063  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:31.567101  622913 retry.go:31] will retry after 5.348365924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:29.273854  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:29.773973  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.273040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.773142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.273258  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.773723  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.274053  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.774024  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.273125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.773200  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:33.537277  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:35.488997  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:35.615701  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:35.615734  622913 retry.go:31] will retry after 18.593547057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:36.036633  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:36.916463  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:36.985838  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:36.985870  622913 retry.go:31] will retry after 7.879856322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:34.273224  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:34.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.273423  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.773837  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.273251  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.773088  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.773099  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.773678  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.518385  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:38.536542  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:38.629558  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:38.629596  622913 retry.go:31] will retry after 11.083764817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:40.537112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:43.037066  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:39.273565  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:39.773916  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.274028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.773120  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.263107  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:41.273658  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:41.328103  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.328152  620795 retry.go:31] will retry after 24.557962123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.773949  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.855229  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:41.913722  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.913758  620795 retry.go:31] will retry after 29.657634591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:42.273168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:42.773137  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.273064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.773040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.866836  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:44.926788  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:44.926822  622913 retry.go:31] will retry after 12.537177434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:45.536544  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:47.537056  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:44.273531  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.773694  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.273864  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.773153  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.273336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.773222  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.273977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.382145  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:47.444684  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.444761  620795 retry.go:31] will retry after 14.939941469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.773125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.773715  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.714461  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:49.810126  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:49.810163  622913 retry.go:31] will retry after 17.034686012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:50.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:52.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:49.274132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.773105  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.273278  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.773375  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.273108  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.773957  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.273086  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.773220  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.273134  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.773528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.210466  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:54.276658  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.276693  622913 retry.go:31] will retry after 15.477790737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:55.037124  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:57.464704  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:57.536423  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:57.546896  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:57.546941  622913 retry.go:31] will retry after 45.136010492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.273748  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.773661  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.273945  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.773185  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.273156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.773921  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:57.273352  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:03:57.273425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:03:57.360759  620795 cri.go:89] found id: ""
	I1213 12:03:57.360784  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.360793  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:03:57.360799  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:03:57.360899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:03:57.386673  620795 cri.go:89] found id: ""
	I1213 12:03:57.386699  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.386709  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:03:57.386715  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:03:57.386772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:03:57.412179  620795 cri.go:89] found id: ""
	I1213 12:03:57.412202  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.412211  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:03:57.412217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:03:57.412275  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:03:57.440758  620795 cri.go:89] found id: ""
	I1213 12:03:57.440782  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.440791  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:03:57.440797  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:03:57.440863  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:03:57.474164  620795 cri.go:89] found id: ""
	I1213 12:03:57.474189  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.474198  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:03:57.474205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:03:57.474266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:03:57.513790  620795 cri.go:89] found id: ""
	I1213 12:03:57.513811  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.513820  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:03:57.513826  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:03:57.513882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:03:57.549685  620795 cri.go:89] found id: ""
	I1213 12:03:57.549708  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.549716  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:03:57.549723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:03:57.549784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:03:57.575809  620795 cri.go:89] found id: ""
	I1213 12:03:57.575830  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.575839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:03:57.575848  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:03:57.575860  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:03:57.645191  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:03:57.645229  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:03:57.662016  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:03:57.662048  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:03:57.724395  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:03:57.724433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:03:57.724446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:03:57.752976  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:03:57.753012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:00.036301  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:02.037075  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:00.282268  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:00.369064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:00.369151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:00.446224  620795 cri.go:89] found id: ""
	I1213 12:04:00.446257  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.446267  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:00.446274  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:00.446398  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:00.492701  620795 cri.go:89] found id: ""
	I1213 12:04:00.492728  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.492737  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:00.492744  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:00.492814  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:00.537493  620795 cri.go:89] found id: ""
	I1213 12:04:00.537573  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.537600  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:00.537617  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:00.537703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:00.567417  620795 cri.go:89] found id: ""
	I1213 12:04:00.567457  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.567467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:00.567493  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:00.567660  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:00.597259  620795 cri.go:89] found id: ""
	I1213 12:04:00.597333  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.597358  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:00.597371  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:00.597453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:00.624935  620795 cri.go:89] found id: ""
	I1213 12:04:00.625008  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.625032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:00.625053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:00.625125  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:00.656802  620795 cri.go:89] found id: ""
	I1213 12:04:00.656830  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.656846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:00.656853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:00.656924  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:00.684243  620795 cri.go:89] found id: ""
	I1213 12:04:00.684318  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.684342  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:00.684364  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:00.684406  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:00.755205  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:00.755244  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:00.772314  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:00.772345  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:00.841157  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:00.841236  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:00.841257  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:00.870321  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:00.870357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:02.384998  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:02.445321  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:02.445354  620795 retry.go:31] will retry after 47.283712675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:03.403559  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:03.414405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:03.414472  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:03.440207  620795 cri.go:89] found id: ""
	I1213 12:04:03.440275  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.440299  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:03.440320  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:03.440406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:03.473860  620795 cri.go:89] found id: ""
	I1213 12:04:03.473906  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.473916  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:03.473923  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:03.474005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:03.500069  620795 cri.go:89] found id: ""
	I1213 12:04:03.500102  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.500111  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:03.500118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:03.500194  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:03.550253  620795 cri.go:89] found id: ""
	I1213 12:04:03.550329  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.550353  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:03.550372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:03.550459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:03.595628  620795 cri.go:89] found id: ""
	I1213 12:04:03.595713  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.595737  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:03.595757  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:03.595871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:03.626718  620795 cri.go:89] found id: ""
	I1213 12:04:03.626796  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.626827  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:03.626849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:03.626954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:03.657254  620795 cri.go:89] found id: ""
	I1213 12:04:03.657281  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.657290  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:03.657297  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:03.657356  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:03.682193  620795 cri.go:89] found id: ""
	I1213 12:04:03.682268  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.682292  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:03.682315  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:03.682355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:03.750002  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:03.750025  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:03.750039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:03.779008  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:03.779046  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:03.807344  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:03.807424  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:03.879158  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:03.879201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:04:04.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:06.845581  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:06.913058  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.913091  622913 retry.go:31] will retry after 30.701510805s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:07.036960  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:05.886355  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:05.944754  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:05.944842  620795 retry.go:31] will retry after 33.803790372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.397350  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:06.407918  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:06.407990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:06.436013  620795 cri.go:89] found id: ""
	I1213 12:04:06.436040  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.436049  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:06.436056  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:06.436121  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:06.462051  620795 cri.go:89] found id: ""
	I1213 12:04:06.462074  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.462083  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:06.462089  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:06.462147  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:06.487916  620795 cri.go:89] found id: ""
	I1213 12:04:06.487943  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.487952  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:06.487959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:06.488027  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:06.514150  620795 cri.go:89] found id: ""
	I1213 12:04:06.514181  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.514190  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:06.514196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:06.514255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:06.567862  620795 cri.go:89] found id: ""
	I1213 12:04:06.567900  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.567910  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:06.567917  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:06.567977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:06.615399  620795 cri.go:89] found id: ""
	I1213 12:04:06.615428  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.615446  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:06.615453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:06.615546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:06.645078  620795 cri.go:89] found id: ""
	I1213 12:04:06.645150  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.645174  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:06.645196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:06.645278  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:06.673976  620795 cri.go:89] found id: ""
	I1213 12:04:06.674002  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.674011  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:06.674022  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:06.674067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:06.703467  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:06.703504  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:06.731693  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:06.731721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:06.801110  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:06.801154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:06.817774  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:06.817804  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:06.899087  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:04:09.536141  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.755504  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:09.840522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:09.840549  622913 retry.go:31] will retry after 18.501787354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:11.536619  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.400132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:09.410430  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:09.410500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:09.440067  620795 cri.go:89] found id: ""
	I1213 12:04:09.440090  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.440100  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:09.440107  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:09.440167  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:09.470041  620795 cri.go:89] found id: ""
	I1213 12:04:09.470062  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.470071  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:09.470078  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:09.470135  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:09.496421  620795 cri.go:89] found id: ""
	I1213 12:04:09.496444  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.496453  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:09.496459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:09.496516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:09.535210  620795 cri.go:89] found id: ""
	I1213 12:04:09.535233  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.535241  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:09.535248  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:09.535322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:09.593867  620795 cri.go:89] found id: ""
	I1213 12:04:09.593894  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.593905  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:09.593912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:09.593967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:09.633869  620795 cri.go:89] found id: ""
	I1213 12:04:09.633895  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.633904  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:09.633911  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:09.633967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:09.660082  620795 cri.go:89] found id: ""
	I1213 12:04:09.660104  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.660113  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:09.660119  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:09.660180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:09.686975  620795 cri.go:89] found id: ""
	I1213 12:04:09.687005  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.687013  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:09.687023  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:09.687035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:09.756960  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:09.756994  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:09.779895  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:09.779929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:09.858208  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:09.858229  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:09.858243  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:09.886438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:09.886472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:11.571741  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:11.635299  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:11.635338  620795 retry.go:31] will retry after 28.848947099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:12.418247  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:12.428921  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:12.428996  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:12.453422  620795 cri.go:89] found id: ""
	I1213 12:04:12.453447  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.453455  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:12.453462  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:12.453523  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:12.482791  620795 cri.go:89] found id: ""
	I1213 12:04:12.482818  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.482827  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:12.482834  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:12.482892  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:12.509185  620795 cri.go:89] found id: ""
	I1213 12:04:12.509207  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.509216  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:12.509222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:12.509281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:12.555782  620795 cri.go:89] found id: ""
	I1213 12:04:12.555810  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.555820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:12.555868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:12.555953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:12.609661  620795 cri.go:89] found id: ""
	I1213 12:04:12.609682  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.609691  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:12.609697  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:12.609753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:12.636223  620795 cri.go:89] found id: ""
	I1213 12:04:12.636251  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.636268  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:12.636275  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:12.636335  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:12.663456  620795 cri.go:89] found id: ""
	I1213 12:04:12.663484  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.663493  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:12.663499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:12.663583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:12.688687  620795 cri.go:89] found id: ""
	I1213 12:04:12.688714  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.688723  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:12.688733  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:12.688745  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:12.705209  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:12.705240  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:12.766977  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:12.767041  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:12.767064  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:12.795358  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:12.795396  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:12.823112  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:12.823143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:04:14.037178  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:16.536405  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:15.388432  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:15.398781  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:15.398905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:15.425880  620795 cri.go:89] found id: ""
	I1213 12:04:15.425920  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.425929  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:15.425935  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:15.426005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:15.451424  620795 cri.go:89] found id: ""
	I1213 12:04:15.451467  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.451477  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:15.451486  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:15.451583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:15.476481  620795 cri.go:89] found id: ""
	I1213 12:04:15.476525  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.476534  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:15.476541  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:15.476612  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:15.502062  620795 cri.go:89] found id: ""
	I1213 12:04:15.502088  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.502097  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:15.502104  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:15.502173  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:15.588057  620795 cri.go:89] found id: ""
	I1213 12:04:15.588132  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.588155  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:15.588175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:15.588279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:15.616479  620795 cri.go:89] found id: ""
	I1213 12:04:15.616506  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.616519  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:15.616526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:15.616602  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:15.649712  620795 cri.go:89] found id: ""
	I1213 12:04:15.649789  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.649813  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:15.649827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:15.649912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:15.675926  620795 cri.go:89] found id: ""
	I1213 12:04:15.675995  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.676019  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:15.676034  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:15.676049  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:15.692725  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:15.692755  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:15.759900  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:15.759963  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:15.759989  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:15.789315  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:15.789425  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:15.818647  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:15.818675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.385812  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:18.396389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:18.396461  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:18.422777  620795 cri.go:89] found id: ""
	I1213 12:04:18.422800  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.422808  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:18.422814  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:18.422873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:18.448579  620795 cri.go:89] found id: ""
	I1213 12:04:18.448607  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.448616  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:18.448622  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:18.448677  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:18.474629  620795 cri.go:89] found id: ""
	I1213 12:04:18.474707  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.474744  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:18.474768  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:18.474859  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:18.499793  620795 cri.go:89] found id: ""
	I1213 12:04:18.499819  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.499828  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:18.499837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:18.499894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:18.531333  620795 cri.go:89] found id: ""
	I1213 12:04:18.531368  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.531377  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:18.531383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:18.531450  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:18.583893  620795 cri.go:89] found id: ""
	I1213 12:04:18.583923  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.583932  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:18.583939  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:18.584008  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:18.620082  620795 cri.go:89] found id: ""
	I1213 12:04:18.620120  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.620129  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:18.620135  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:18.620210  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:18.647112  620795 cri.go:89] found id: ""
	I1213 12:04:18.647137  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.647145  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:18.647155  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:18.647167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.712791  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:18.712833  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:18.728892  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:18.728920  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:18.793078  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:18.793150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:18.793172  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:18.821911  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:18.821947  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:18.537035  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:20.537076  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:23.036959  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:21.353995  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:21.364153  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:21.364265  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:21.389593  620795 cri.go:89] found id: ""
	I1213 12:04:21.389673  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.389690  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:21.389698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:21.389773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:21.418684  620795 cri.go:89] found id: ""
	I1213 12:04:21.418706  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.418715  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:21.418722  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:21.418778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:21.442724  620795 cri.go:89] found id: ""
	I1213 12:04:21.442799  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.442822  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:21.442841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:21.442927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:21.472117  620795 cri.go:89] found id: ""
	I1213 12:04:21.472141  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.472150  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:21.472156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:21.472213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:21.501589  620795 cri.go:89] found id: ""
	I1213 12:04:21.501612  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.501621  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:21.501627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:21.501688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:21.563954  620795 cri.go:89] found id: ""
	I1213 12:04:21.564023  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.564046  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:21.564069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:21.564151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:21.612229  620795 cri.go:89] found id: ""
	I1213 12:04:21.612263  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.612273  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:21.612280  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:21.612339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:21.639602  620795 cri.go:89] found id: ""
	I1213 12:04:21.639636  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.639645  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:21.639655  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:21.639669  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:21.705516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:21.705552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:21.722491  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:21.722521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:21.783641  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:21.783663  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:21.783676  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:21.811307  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:21.811340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:25.037157  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:27.037243  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:24.340508  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:24.351403  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:24.351482  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:24.382302  620795 cri.go:89] found id: ""
	I1213 12:04:24.382379  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.382404  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:24.382425  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:24.382538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:24.408839  620795 cri.go:89] found id: ""
	I1213 12:04:24.408862  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.408871  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:24.408878  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:24.408936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:24.435623  620795 cri.go:89] found id: ""
	I1213 12:04:24.435651  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.435661  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:24.435667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:24.435727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:24.461121  620795 cri.go:89] found id: ""
	I1213 12:04:24.461149  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.461158  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:24.461165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:24.461251  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:24.486111  620795 cri.go:89] found id: ""
	I1213 12:04:24.486144  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.486153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:24.486176  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:24.486257  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:24.511493  620795 cri.go:89] found id: ""
	I1213 12:04:24.511567  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.511578  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:24.511585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:24.511646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:24.546004  620795 cri.go:89] found id: ""
	I1213 12:04:24.546029  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.546052  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:24.546059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:24.546129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:24.573601  620795 cri.go:89] found id: ""
	I1213 12:04:24.573677  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.573699  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:24.573720  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:24.573758  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:24.651738  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:24.651779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:24.669002  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:24.669035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:24.734744  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:24.734767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:24.734780  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:24.763652  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:24.763687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.296287  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:27.306558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:27.306632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:27.331288  620795 cri.go:89] found id: ""
	I1213 12:04:27.331315  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.331324  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:27.331331  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:27.331388  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:27.357587  620795 cri.go:89] found id: ""
	I1213 12:04:27.357611  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.357620  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:27.357626  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:27.357681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:27.383604  620795 cri.go:89] found id: ""
	I1213 12:04:27.383628  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.383637  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:27.383644  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:27.383699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:27.408104  620795 cri.go:89] found id: ""
	I1213 12:04:27.408183  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.408199  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:27.408207  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:27.408273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:27.434284  620795 cri.go:89] found id: ""
	I1213 12:04:27.434309  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.434318  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:27.434325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:27.434389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:27.459356  620795 cri.go:89] found id: ""
	I1213 12:04:27.459382  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.459391  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:27.459399  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:27.459457  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:27.484476  620795 cri.go:89] found id: ""
	I1213 12:04:27.484543  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.484558  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:27.484565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:27.484630  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:27.510910  620795 cri.go:89] found id: ""
	I1213 12:04:27.510937  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.510946  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:27.510955  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:27.510967  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:27.543054  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:27.543085  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:27.641750  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:27.641818  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:27.641838  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:27.671375  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:27.671412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.701704  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:27.701735  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:28.342721  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:28.405775  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:28.405881  622913 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:29.536294  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:31.536581  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:30.268871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:30.279472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:30.279561  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:30.305479  620795 cri.go:89] found id: ""
	I1213 12:04:30.305504  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.305513  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:30.305520  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:30.305577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:30.330879  620795 cri.go:89] found id: ""
	I1213 12:04:30.330904  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.330914  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:30.330920  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:30.330978  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:30.358794  620795 cri.go:89] found id: ""
	I1213 12:04:30.358821  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.358830  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:30.358837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:30.358899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:30.384574  620795 cri.go:89] found id: ""
	I1213 12:04:30.384648  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.384662  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:30.384669  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:30.384728  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:30.409348  620795 cri.go:89] found id: ""
	I1213 12:04:30.409374  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.409383  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:30.409390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:30.409460  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:30.435261  620795 cri.go:89] found id: ""
	I1213 12:04:30.435286  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.435295  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:30.435302  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:30.435357  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:30.459810  620795 cri.go:89] found id: ""
	I1213 12:04:30.459834  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.459843  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:30.459849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:30.459906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:30.485697  620795 cri.go:89] found id: ""
	I1213 12:04:30.485720  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.485728  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:30.485738  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:30.485749  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:30.513499  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:30.513534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:30.574739  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:30.574767  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:30.658042  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:30.658078  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:30.678263  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:30.678291  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:30.741695  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.242096  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:33.253053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:33.253146  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:33.279722  620795 cri.go:89] found id: ""
	I1213 12:04:33.279748  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.279756  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:33.279764  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:33.279820  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:33.306092  620795 cri.go:89] found id: ""
	I1213 12:04:33.306129  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.306139  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:33.306163  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:33.306252  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:33.332772  620795 cri.go:89] found id: ""
	I1213 12:04:33.332796  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.332813  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:33.332819  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:33.332882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:33.367716  620795 cri.go:89] found id: ""
	I1213 12:04:33.367744  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.367754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:33.367760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:33.367822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:33.400175  620795 cri.go:89] found id: ""
	I1213 12:04:33.400242  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.400258  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:33.400266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:33.400325  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:33.424852  620795 cri.go:89] found id: ""
	I1213 12:04:33.424877  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.424887  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:33.424894  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:33.424984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:33.453556  620795 cri.go:89] found id: ""
	I1213 12:04:33.453581  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.453590  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:33.453597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:33.453653  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:33.479131  620795 cri.go:89] found id: ""
	I1213 12:04:33.479156  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.479165  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:33.479175  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:33.479187  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:33.549906  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:33.550637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:33.572706  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:33.572863  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:33.662497  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.662522  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:33.662535  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:33.692067  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:33.692111  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:33.536622  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:36.036352  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:37.615506  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:37.688522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:37.688627  622913 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:38.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:36.220187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:36.230829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:36.230906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:36.260247  620795 cri.go:89] found id: ""
	I1213 12:04:36.260271  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.260280  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:36.260286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:36.260342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:36.285940  620795 cri.go:89] found id: ""
	I1213 12:04:36.285973  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.285982  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:36.285988  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:36.286059  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:36.311531  620795 cri.go:89] found id: ""
	I1213 12:04:36.311553  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.311561  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:36.311568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:36.311633  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:36.336755  620795 cri.go:89] found id: ""
	I1213 12:04:36.336849  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.336865  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:36.336873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:36.336933  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:36.361652  620795 cri.go:89] found id: ""
	I1213 12:04:36.361676  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.361684  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:36.361690  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:36.361748  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:36.392507  620795 cri.go:89] found id: ""
	I1213 12:04:36.392530  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.392539  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:36.392545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:36.392601  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:36.418503  620795 cri.go:89] found id: ""
	I1213 12:04:36.418526  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.418535  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:36.418540  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:36.418614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:36.444832  620795 cri.go:89] found id: ""
	I1213 12:04:36.444856  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.444865  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:36.444874  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:36.444891  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:36.515523  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:36.515566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:36.535671  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:36.535699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:36.655383  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:36.655406  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:36.655421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:36.684176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:36.684212  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.215366  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:39.225843  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:39.225914  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:04:40.037338  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:42.538150  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:42.683554  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:42.744769  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:42.744869  622913 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.747993  622913 out.go:179] * Enabled addons: 
	I1213 12:04:42.750740  622913 addons.go:530] duration metric: took 1m31.849485278s for enable addons: enabled=[]
	I1213 12:04:39.251825  620795 cri.go:89] found id: ""
	I1213 12:04:39.251850  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.251860  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:39.251867  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:39.251927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:39.280966  620795 cri.go:89] found id: ""
	I1213 12:04:39.280991  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.281000  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:39.281007  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:39.281063  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:39.305488  620795 cri.go:89] found id: ""
	I1213 12:04:39.305511  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.305520  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:39.305526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:39.305583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:39.330461  620795 cri.go:89] found id: ""
	I1213 12:04:39.330484  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.330493  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:39.330500  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:39.330556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:39.355410  620795 cri.go:89] found id: ""
	I1213 12:04:39.355483  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.355507  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:39.355565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:39.355706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:39.384890  620795 cri.go:89] found id: ""
	I1213 12:04:39.384916  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.384926  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:39.384933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:39.385017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:39.409735  620795 cri.go:89] found id: ""
	I1213 12:04:39.409758  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.409767  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:39.409773  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:39.409833  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:39.439648  620795 cri.go:89] found id: ""
	I1213 12:04:39.439673  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.439685  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:39.439695  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:39.439706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:39.505768  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:39.505803  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:39.525572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:39.525602  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:39.624619  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:39.624643  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:39.624656  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:39.653269  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:39.653306  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.749621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:39.805957  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:39.806064  620795 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:40.484759  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:40.549677  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:40.549776  620795 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.182348  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:42.195718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:42.195860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:42.224999  620795 cri.go:89] found id: ""
	I1213 12:04:42.225044  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.225058  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:42.225067  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:42.225192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:42.254835  620795 cri.go:89] found id: ""
	I1213 12:04:42.254913  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.254949  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:42.254975  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:42.255077  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:42.283814  620795 cri.go:89] found id: ""
	I1213 12:04:42.283889  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.283916  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:42.283931  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:42.284014  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:42.315795  620795 cri.go:89] found id: ""
	I1213 12:04:42.315823  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.315859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:42.315871  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:42.315954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:42.342987  620795 cri.go:89] found id: ""
	I1213 12:04:42.343026  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.343035  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:42.343042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:42.343114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:42.368935  620795 cri.go:89] found id: ""
	I1213 12:04:42.368969  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.368978  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:42.368986  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:42.369052  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:42.398633  620795 cri.go:89] found id: ""
	I1213 12:04:42.398703  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.398727  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:42.398747  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:42.398834  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:42.424223  620795 cri.go:89] found id: ""
	I1213 12:04:42.424299  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.424324  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:42.424342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:42.424367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:42.453160  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:42.453198  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:42.486810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:42.486840  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:42.567003  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:42.567043  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:42.606556  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:42.606591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:42.678272  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:04:45.037213  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:47.536268  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:45.178582  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:45.193685  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:45.193792  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:45.236374  620795 cri.go:89] found id: ""
	I1213 12:04:45.236402  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.236411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:45.236419  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:45.236487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:45.279160  620795 cri.go:89] found id: ""
	I1213 12:04:45.279193  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.279203  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:45.279210  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:45.279281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:45.308966  620795 cri.go:89] found id: ""
	I1213 12:04:45.308991  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.309000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:45.309006  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:45.309065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:45.337083  620795 cri.go:89] found id: ""
	I1213 12:04:45.337110  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.337119  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:45.337126  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:45.337212  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:45.366596  620795 cri.go:89] found id: ""
	I1213 12:04:45.366619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.366628  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:45.366635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:45.366694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:45.391548  620795 cri.go:89] found id: ""
	I1213 12:04:45.391572  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.391581  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:45.391588  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:45.391649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:45.418598  620795 cri.go:89] found id: ""
	I1213 12:04:45.418619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.418628  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:45.418635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:45.418700  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:45.448270  620795 cri.go:89] found id: ""
	I1213 12:04:45.448292  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.448301  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:45.448310  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:45.448321  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:45.478882  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:45.478907  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:45.548829  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:45.548916  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:45.567213  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:45.567382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:45.681775  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:45.681800  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:45.681816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.211634  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:48.222293  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:48.222364  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:48.249683  620795 cri.go:89] found id: ""
	I1213 12:04:48.249707  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.249715  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:48.249722  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:48.249785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:48.277977  620795 cri.go:89] found id: ""
	I1213 12:04:48.277999  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.278009  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:48.278015  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:48.278072  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:48.304052  620795 cri.go:89] found id: ""
	I1213 12:04:48.304080  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.304089  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:48.304096  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:48.304153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:48.334039  620795 cri.go:89] found id: ""
	I1213 12:04:48.334066  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.334075  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:48.334087  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:48.334151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:48.364623  620795 cri.go:89] found id: ""
	I1213 12:04:48.364646  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.364654  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:48.364661  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:48.364723  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:48.389613  620795 cri.go:89] found id: ""
	I1213 12:04:48.389684  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.389707  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:48.389718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:48.389797  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:48.418439  620795 cri.go:89] found id: ""
	I1213 12:04:48.418467  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.418477  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:48.418485  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:48.418544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:48.446312  620795 cri.go:89] found id: ""
	I1213 12:04:48.446341  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.446350  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:48.446360  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:48.446372  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:48.463031  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:48.463116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:48.558736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:48.558767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:48.558782  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.606808  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:48.606885  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:48.638169  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:48.638199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:49.729332  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:49.791669  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:49.791778  620795 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:49.794717  620795 out.go:179] * Enabled addons: 
	W1213 12:04:50.037029  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:52.037265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:49.797659  620795 addons.go:530] duration metric: took 1m53.008142261s for enable addons: enabled=[]
	I1213 12:04:51.210580  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:51.221809  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:51.221877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:51.247182  620795 cri.go:89] found id: ""
	I1213 12:04:51.247259  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.247282  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:51.247301  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:51.247396  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:51.275541  620795 cri.go:89] found id: ""
	I1213 12:04:51.275608  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.275623  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:51.275631  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:51.275695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:51.300774  620795 cri.go:89] found id: ""
	I1213 12:04:51.300866  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.300889  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:51.300902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:51.300973  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:51.330039  620795 cri.go:89] found id: ""
	I1213 12:04:51.330064  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.330074  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:51.330080  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:51.330152  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:51.358455  620795 cri.go:89] found id: ""
	I1213 12:04:51.358482  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.358491  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:51.358497  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:51.358556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:51.387907  620795 cri.go:89] found id: ""
	I1213 12:04:51.387933  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.387942  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:51.387948  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:51.388011  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:51.414050  620795 cri.go:89] found id: ""
	I1213 12:04:51.414075  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.414084  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:51.414091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:51.414148  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:51.440682  620795 cri.go:89] found id: ""
	I1213 12:04:51.440715  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.440729  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:51.440739  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:51.440752  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:51.502275  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:51.502296  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:51.502308  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:51.533683  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:51.533722  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:51.590439  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:51.590468  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:51.668678  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:51.668719  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.186166  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:54.196649  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:54.196718  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:54.221630  620795 cri.go:89] found id: ""
	I1213 12:04:54.221656  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.221665  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:54.221672  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:54.221729  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1213 12:04:54.537026  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:56.537082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:54.246332  620795 cri.go:89] found id: ""
	I1213 12:04:54.246354  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.246362  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:54.246368  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:54.246425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:54.274363  620795 cri.go:89] found id: ""
	I1213 12:04:54.274385  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.274396  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:54.274405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:54.274465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:54.299013  620795 cri.go:89] found id: ""
	I1213 12:04:54.299036  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.299045  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:54.299051  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:54.299115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:54.325098  620795 cri.go:89] found id: ""
	I1213 12:04:54.325123  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.325133  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:54.325140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:54.325200  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:54.350290  620795 cri.go:89] found id: ""
	I1213 12:04:54.350318  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.350327  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:54.350334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:54.350394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:54.377186  620795 cri.go:89] found id: ""
	I1213 12:04:54.377209  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.377218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:54.377224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:54.377283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:54.409137  620795 cri.go:89] found id: ""
	I1213 12:04:54.409164  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.409174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:54.409184  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:54.409196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.426177  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:54.426207  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:54.491873  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:54.491896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:54.491909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:54.521061  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:54.521153  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:54.580593  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:54.580623  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.166168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:57.177178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:57.177255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:57.209135  620795 cri.go:89] found id: ""
	I1213 12:04:57.209170  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.209179  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:57.209186  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:57.209254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:57.236323  620795 cri.go:89] found id: ""
	I1213 12:04:57.236359  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.236368  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:57.236375  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:57.236433  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:57.261970  620795 cri.go:89] found id: ""
	I1213 12:04:57.261992  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.262001  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:57.262007  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:57.262064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:57.287149  620795 cri.go:89] found id: ""
	I1213 12:04:57.287171  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.287179  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:57.287186  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:57.287242  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:57.312282  620795 cri.go:89] found id: ""
	I1213 12:04:57.312307  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.312316  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:57.312322  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:57.312380  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:57.341454  620795 cri.go:89] found id: ""
	I1213 12:04:57.341480  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.341489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:57.341496  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:57.341559  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:57.366694  620795 cri.go:89] found id: ""
	I1213 12:04:57.366718  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.366729  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:57.366736  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:57.366795  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:57.392434  620795 cri.go:89] found id: ""
	I1213 12:04:57.392459  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.392468  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:57.392478  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:57.392490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:57.426595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:57.426622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.490950  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:57.490984  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:57.508294  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:57.508326  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:57.637638  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:57.637717  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:57.637746  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:04:59.037033  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:01.536339  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:00.166037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:00.211490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:00.212114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:00.294178  620795 cri.go:89] found id: ""
	I1213 12:05:00.294201  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.294210  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:00.294217  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:00.294285  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:00.376480  620795 cri.go:89] found id: ""
	I1213 12:05:00.376506  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.376516  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:00.376523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:00.376593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:00.416213  620795 cri.go:89] found id: ""
	I1213 12:05:00.416240  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.416250  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:00.416261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:00.416329  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:00.449590  620795 cri.go:89] found id: ""
	I1213 12:05:00.449620  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.449629  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:00.449637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:00.449722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:00.479461  620795 cri.go:89] found id: ""
	I1213 12:05:00.479486  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.479495  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:00.479502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:00.479589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:00.509094  620795 cri.go:89] found id: ""
	I1213 12:05:00.509123  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.509132  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:00.509138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:00.509204  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:00.583923  620795 cri.go:89] found id: ""
	I1213 12:05:00.583952  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.583962  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:00.583969  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:00.584049  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:00.624268  620795 cri.go:89] found id: ""
	I1213 12:05:00.624299  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.624309  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:00.624322  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:00.624334  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:00.701394  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:00.701419  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:00.701432  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:00.730125  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:00.730170  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:00.760465  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:00.760494  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:00.826577  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:00.826619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.345642  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:03.359010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:03.359082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:03.391792  620795 cri.go:89] found id: ""
	I1213 12:05:03.391816  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.391825  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:03.391832  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:03.391889  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:03.418730  620795 cri.go:89] found id: ""
	I1213 12:05:03.418759  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.418768  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:03.418774  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:03.418831  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:03.447034  620795 cri.go:89] found id: ""
	I1213 12:05:03.447062  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.447070  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:03.447077  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:03.447137  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:03.471737  620795 cri.go:89] found id: ""
	I1213 12:05:03.471763  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.471772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:03.471778  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:03.471832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:03.496618  620795 cri.go:89] found id: ""
	I1213 12:05:03.496641  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.496650  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:03.496656  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:03.496721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:03.538834  620795 cri.go:89] found id: ""
	I1213 12:05:03.538855  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.538901  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:03.538915  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:03.539006  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:03.577353  620795 cri.go:89] found id: ""
	I1213 12:05:03.577375  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.577437  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:03.577445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:03.577590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:03.613163  620795 cri.go:89] found id: ""
	I1213 12:05:03.613234  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.613247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:03.613257  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:03.613296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:03.652148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:03.652174  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:03.718838  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:03.718879  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.736159  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:03.736189  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:03.801478  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:03.801504  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:03.801519  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:05:03.537034  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:06.036238  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:08.037112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:06.330711  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:06.341136  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:06.341246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:06.366066  620795 cri.go:89] found id: ""
	I1213 12:05:06.366099  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.366108  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:06.366114  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:06.366178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:06.394525  620795 cri.go:89] found id: ""
	I1213 12:05:06.394563  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.394573  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:06.394580  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:06.394649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:06.424244  620795 cri.go:89] found id: ""
	I1213 12:05:06.424312  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.424336  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:06.424357  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:06.424449  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:06.450497  620795 cri.go:89] found id: ""
	I1213 12:05:06.450529  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.450538  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:06.450545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:06.450614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:06.475735  620795 cri.go:89] found id: ""
	I1213 12:05:06.475759  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.475768  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:06.475774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:06.475835  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:06.501224  620795 cri.go:89] found id: ""
	I1213 12:05:06.501248  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.501257  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:06.501263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:06.501322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:06.548385  620795 cri.go:89] found id: ""
	I1213 12:05:06.548410  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.548419  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:06.548425  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:06.548498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:06.613365  620795 cri.go:89] found id: ""
	I1213 12:05:06.613444  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.613469  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:06.613490  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:06.613525  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:06.642036  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:06.642067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:06.675194  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:06.675218  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:06.743889  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:06.743933  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:06.760968  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:06.761004  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:06.828998  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:05:10.037152  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:12.536415  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:09.329981  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:09.340577  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:09.340644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:09.368902  620795 cri.go:89] found id: ""
	I1213 12:05:09.368926  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.368935  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:09.368941  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:09.369004  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:09.397232  620795 cri.go:89] found id: ""
	I1213 12:05:09.397263  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.397273  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:09.397280  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:09.397353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:09.424425  620795 cri.go:89] found id: ""
	I1213 12:05:09.424455  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.424465  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:09.424471  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:09.424529  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:09.449435  620795 cri.go:89] found id: ""
	I1213 12:05:09.449457  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.449466  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:09.449472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:09.449534  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:09.473489  620795 cri.go:89] found id: ""
	I1213 12:05:09.473512  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.473521  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:09.473527  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:09.473584  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:09.503533  620795 cri.go:89] found id: ""
	I1213 12:05:09.503560  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.503569  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:09.503576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:09.503632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:09.569217  620795 cri.go:89] found id: ""
	I1213 12:05:09.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.569312  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:09.569331  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:09.569431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:09.616563  620795 cri.go:89] found id: ""
	I1213 12:05:09.616632  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.616663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:09.616686  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:09.616726  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:09.645190  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:09.645217  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:09.710725  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:09.710760  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:09.727200  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:09.727231  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:09.793579  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:09.793611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:09.793625  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.321617  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:12.332442  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:12.332517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:12.357812  620795 cri.go:89] found id: ""
	I1213 12:05:12.357835  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.357844  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:12.357851  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:12.357912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:12.383803  620795 cri.go:89] found id: ""
	I1213 12:05:12.383827  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.383836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:12.383842  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:12.383902  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:12.408966  620795 cri.go:89] found id: ""
	I1213 12:05:12.409044  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.409061  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:12.409069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:12.409183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:12.438466  620795 cri.go:89] found id: ""
	I1213 12:05:12.438491  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.438499  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:12.438506  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:12.438562  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:12.468347  620795 cri.go:89] found id: ""
	I1213 12:05:12.468375  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.468385  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:12.468391  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:12.468455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:12.493833  620795 cri.go:89] found id: ""
	I1213 12:05:12.493860  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.493869  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:12.493876  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:12.493936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:12.540091  620795 cri.go:89] found id: ""
	I1213 12:05:12.540120  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.540130  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:12.540137  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:12.540202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:12.593138  620795 cri.go:89] found id: ""
	I1213 12:05:12.593165  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.593174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:12.593184  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:12.593195  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:12.670751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:12.670790  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:12.688162  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:12.688196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:12.753953  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:12.753978  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:12.753990  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.782410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:12.782447  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
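	The repeated cycle above is minikube probing for each expected control-plane container with crictl and, finding none, falling back to collecting kubelet, dmesg, CRI-O, and kubectl diagnostics. A minimal way to reproduce the same checks by hand, using only commands already shown in this log (this assumes shell access to the node, e.g. via minikube ssh):
	
	# probe each expected control-plane container; empty output matches the
	# "found id: \"\"" / "0 containers" lines above
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="${name}"
	done
	# gather the same fallback logs minikube collects when nothing is found
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig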
	W1213 12:05:14.537113  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:17.037129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:15.314766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:15.325177  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:15.325244  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:15.350233  620795 cri.go:89] found id: ""
	I1213 12:05:15.350259  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.350269  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:15.350276  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:15.350332  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:15.375095  620795 cri.go:89] found id: ""
	I1213 12:05:15.375121  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.375131  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:15.375138  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:15.375198  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:15.400509  620795 cri.go:89] found id: ""
	I1213 12:05:15.400531  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.400539  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:15.400545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:15.400604  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:15.429727  620795 cri.go:89] found id: ""
	I1213 12:05:15.429749  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.429758  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:15.429765  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:15.429818  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:15.455300  620795 cri.go:89] found id: ""
	I1213 12:05:15.455321  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.455330  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:15.455336  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:15.455393  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:15.480516  620795 cri.go:89] found id: ""
	I1213 12:05:15.480540  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.480549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:15.480556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:15.480617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:15.508281  620795 cri.go:89] found id: ""
	I1213 12:05:15.508358  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.508375  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:15.508382  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:15.508453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:15.569260  620795 cri.go:89] found id: ""
	I1213 12:05:15.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.569295  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:15.569304  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:15.569317  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:15.653590  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:15.653630  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:15.670770  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:15.670805  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:15.734152  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:15.734221  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:15.734248  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:15.762906  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:15.762941  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.292789  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:18.303334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:18.303410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:18.329348  620795 cri.go:89] found id: ""
	I1213 12:05:18.329372  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.329382  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:18.329389  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:18.329455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:18.358617  620795 cri.go:89] found id: ""
	I1213 12:05:18.358638  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.358647  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:18.358653  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:18.358710  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:18.383565  620795 cri.go:89] found id: ""
	I1213 12:05:18.383589  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.383597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:18.383603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:18.383666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:18.409351  620795 cri.go:89] found id: ""
	I1213 12:05:18.409378  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.409387  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:18.409394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:18.409456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:18.435771  620795 cri.go:89] found id: ""
	I1213 12:05:18.435797  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.435806  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:18.435813  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:18.435875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:18.464513  620795 cri.go:89] found id: ""
	I1213 12:05:18.464539  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.464549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:18.464556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:18.464659  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:18.490219  620795 cri.go:89] found id: ""
	I1213 12:05:18.490244  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.490252  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:18.490260  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:18.490317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:18.532969  620795 cri.go:89] found id: ""
	I1213 12:05:18.532995  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.533004  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:18.533013  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:18.533027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.595123  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:18.595154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:18.672161  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:18.672201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:18.689194  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:18.689222  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:18.754503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:18.754526  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:18.754539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:05:19.537079  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:22.037194  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:21.283365  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:21.294092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:21.294183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:21.321526  620795 cri.go:89] found id: ""
	I1213 12:05:21.321549  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.321559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:21.321565  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:21.321622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:21.349919  620795 cri.go:89] found id: ""
	I1213 12:05:21.349943  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.349952  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:21.349958  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:21.350021  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:21.379881  620795 cri.go:89] found id: ""
	I1213 12:05:21.379906  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.379915  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:21.379922  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:21.379982  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:21.405656  620795 cri.go:89] found id: ""
	I1213 12:05:21.405679  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.405687  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:21.405694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:21.405754  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:21.435716  620795 cri.go:89] found id: ""
	I1213 12:05:21.435752  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.435762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:21.435769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:21.435839  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:21.461176  620795 cri.go:89] found id: ""
	I1213 12:05:21.461199  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.461207  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:21.461214  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:21.461271  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:21.487321  620795 cri.go:89] found id: ""
	I1213 12:05:21.487357  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.487366  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:21.487372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:21.487438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:21.513663  620795 cri.go:89] found id: ""
	I1213 12:05:21.513687  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.513696  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:21.513706  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:21.513740  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:21.547538  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:21.547713  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:21.648986  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:21.649007  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:21.649020  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:21.676895  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:21.676929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:21.706237  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:21.706268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:24.536202  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:26.537127  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
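	The interleaved node_ready warnings come from a second minikube process (622913) polling the no-preload-307409 apiserver, which is still refusing connections on 192.168.85.2:8443. Two quick manual checks that mirror those retries, sketched from the log (the curl call is added here only for illustration; the pgrep pattern, node name, IP, and port are taken verbatim from the lines above):
	
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # is any apiserver process running on the node?
	curl -k --connect-timeout 5 https://192.168.85.2:8443/api/v1/nodes/no-preload-307409    # does the port answer at all?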
	I1213 12:05:24.271406  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:24.281916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:24.281984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:24.306547  620795 cri.go:89] found id: ""
	I1213 12:05:24.306570  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.306579  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:24.306586  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:24.306645  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:24.334194  620795 cri.go:89] found id: ""
	I1213 12:05:24.334218  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.334227  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:24.334234  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:24.334291  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:24.360113  620795 cri.go:89] found id: ""
	I1213 12:05:24.360139  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.360148  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:24.360154  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:24.360219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:24.385854  620795 cri.go:89] found id: ""
	I1213 12:05:24.385879  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.385889  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:24.385896  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:24.385960  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:24.411999  620795 cri.go:89] found id: ""
	I1213 12:05:24.412025  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.412034  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:24.412042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:24.412102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:24.438300  620795 cri.go:89] found id: ""
	I1213 12:05:24.438325  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.438335  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:24.438347  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:24.438405  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:24.464325  620795 cri.go:89] found id: ""
	I1213 12:05:24.464351  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.464361  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:24.464369  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:24.464430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:24.491896  620795 cri.go:89] found id: ""
	I1213 12:05:24.491920  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.491930  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:24.491939  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:24.491971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:24.519363  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:24.519445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:24.616473  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:24.616502  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:24.692608  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:24.692645  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:24.711650  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:24.711689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:24.775602  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.275849  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:27.286597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:27.286680  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:27.311787  620795 cri.go:89] found id: ""
	I1213 12:05:27.311813  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.311822  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:27.311829  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:27.311893  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:27.341056  620795 cri.go:89] found id: ""
	I1213 12:05:27.341123  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.341146  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:27.341160  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:27.341233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:27.365944  620795 cri.go:89] found id: ""
	I1213 12:05:27.365978  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.365986  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:27.365993  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:27.366057  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:27.390576  620795 cri.go:89] found id: ""
	I1213 12:05:27.390611  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.390626  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:27.390633  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:27.390702  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:27.420415  620795 cri.go:89] found id: ""
	I1213 12:05:27.420439  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.420448  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:27.420454  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:27.420516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:27.445745  620795 cri.go:89] found id: ""
	I1213 12:05:27.445812  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.445835  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:27.445853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:27.445936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:27.475470  620795 cri.go:89] found id: ""
	I1213 12:05:27.475508  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.475538  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:27.475547  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:27.475615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:27.502195  620795 cri.go:89] found id: ""
	I1213 12:05:27.502222  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.502231  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:27.502240  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:27.502252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:27.597636  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:27.597744  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:27.629736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:27.629763  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:27.694305  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.694327  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:27.694339  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:27.723090  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:27.723129  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:29.037051  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:31.536823  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:30.253217  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:30.264373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:30.264446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:30.290413  620795 cri.go:89] found id: ""
	I1213 12:05:30.290440  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.290450  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:30.290457  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:30.290517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:30.318052  620795 cri.go:89] found id: ""
	I1213 12:05:30.318079  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.318096  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:30.318104  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:30.318172  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:30.343233  620795 cri.go:89] found id: ""
	I1213 12:05:30.343267  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.343277  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:30.343283  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:30.343349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:30.373053  620795 cri.go:89] found id: ""
	I1213 12:05:30.373077  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.373086  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:30.373092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:30.373149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:30.401783  620795 cri.go:89] found id: ""
	I1213 12:05:30.401862  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.401879  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:30.401886  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:30.401955  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:30.427557  620795 cri.go:89] found id: ""
	I1213 12:05:30.427580  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.427589  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:30.427595  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:30.427652  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:30.452324  620795 cri.go:89] found id: ""
	I1213 12:05:30.452404  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.452426  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:30.452445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:30.452538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:30.485213  620795 cri.go:89] found id: ""
	I1213 12:05:30.485283  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.485307  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:30.485325  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:30.485337  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:30.567099  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:30.571250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:30.599905  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:30.599987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:30.671402  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:30.671475  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:30.671544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:30.700275  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:30.700310  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:33.229307  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:33.240030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:33.240101  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:33.264516  620795 cri.go:89] found id: ""
	I1213 12:05:33.264540  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.264550  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:33.264557  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:33.264622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:33.288665  620795 cri.go:89] found id: ""
	I1213 12:05:33.288694  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.288704  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:33.288711  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:33.288772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:33.318238  620795 cri.go:89] found id: ""
	I1213 12:05:33.318314  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.318338  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:33.318356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:33.318437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:33.342548  620795 cri.go:89] found id: ""
	I1213 12:05:33.342582  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.342592  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:33.342598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:33.342667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:33.368791  620795 cri.go:89] found id: ""
	I1213 12:05:33.368814  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.368823  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:33.368829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:33.368887  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:33.395218  620795 cri.go:89] found id: ""
	I1213 12:05:33.395254  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.395263  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:33.395270  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:33.395342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:33.422228  620795 cri.go:89] found id: ""
	I1213 12:05:33.422263  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.422272  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:33.422279  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:33.422345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:33.448101  620795 cri.go:89] found id: ""
	I1213 12:05:33.448126  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.448136  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:33.448146  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:33.448164  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:33.513958  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:33.513995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:33.536519  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:33.536547  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:33.642718  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:33.642742  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:33.642757  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:33.671233  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:33.671268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:34.036325  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:36.536291  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:36.205718  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:36.216490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:36.216599  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:36.242239  620795 cri.go:89] found id: ""
	I1213 12:05:36.242267  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.242277  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:36.242284  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:36.242345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:36.267114  620795 cri.go:89] found id: ""
	I1213 12:05:36.267140  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.267149  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:36.267155  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:36.267221  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:36.292484  620795 cri.go:89] found id: ""
	I1213 12:05:36.292510  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.292519  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:36.292525  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:36.292586  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:36.317342  620795 cri.go:89] found id: ""
	I1213 12:05:36.317365  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.317374  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:36.317380  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:36.317442  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:36.346675  620795 cri.go:89] found id: ""
	I1213 12:05:36.346746  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.346770  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:36.346788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:36.346878  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:36.374350  620795 cri.go:89] found id: ""
	I1213 12:05:36.374416  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.374440  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:36.374459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:36.374550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:36.401836  620795 cri.go:89] found id: ""
	I1213 12:05:36.401904  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.401927  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:36.401947  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:36.402023  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:36.436530  620795 cri.go:89] found id: ""
	I1213 12:05:36.436612  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.436635  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:36.436653  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:36.436680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:36.464595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:36.464663  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:36.550070  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:36.550121  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:36.581383  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:36.581414  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:36.674763  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:36.674830  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:36.674854  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:39.203663  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:39.214134  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:39.214211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:05:39.036349  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:41.036401  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:43.037206  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:39.240674  620795 cri.go:89] found id: ""
	I1213 12:05:39.240705  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.240714  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:39.240721  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:39.240786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:39.265873  620795 cri.go:89] found id: ""
	I1213 12:05:39.265895  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.265903  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:39.265909  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:39.265966  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:39.291928  620795 cri.go:89] found id: ""
	I1213 12:05:39.291952  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.291960  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:39.291978  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:39.292037  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:39.317111  620795 cri.go:89] found id: ""
	I1213 12:05:39.317144  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.317153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:39.317160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:39.317219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:39.341971  620795 cri.go:89] found id: ""
	I1213 12:05:39.341993  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.342002  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:39.342009  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:39.342065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:39.370095  620795 cri.go:89] found id: ""
	I1213 12:05:39.370166  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.370192  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:39.370212  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:39.370297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:39.396661  620795 cri.go:89] found id: ""
	I1213 12:05:39.396740  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.396765  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:39.396777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:39.396855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:39.426139  620795 cri.go:89] found id: ""
	I1213 12:05:39.426167  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.426177  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:39.426188  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:39.426199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:39.458970  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:39.459002  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:39.525484  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:39.525523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:39.554066  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:39.554149  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:39.647487  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:39.647508  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:39.647543  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.175675  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:42.189064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:42.189149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:42.220105  620795 cri.go:89] found id: ""
	I1213 12:05:42.220135  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.220156  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:42.220164  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:42.220229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:42.250459  620795 cri.go:89] found id: ""
	I1213 12:05:42.250486  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.250495  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:42.250502  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:42.250570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:42.278746  620795 cri.go:89] found id: ""
	I1213 12:05:42.278773  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.278785  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:42.278793  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:42.278855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:42.307046  620795 cri.go:89] found id: ""
	I1213 12:05:42.307073  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.307083  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:42.307092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:42.307153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:42.335010  620795 cri.go:89] found id: ""
	I1213 12:05:42.335035  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.335046  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:42.335052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:42.335114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:42.362128  620795 cri.go:89] found id: ""
	I1213 12:05:42.362154  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.362163  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:42.362170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:42.362231  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:42.396146  620795 cri.go:89] found id: ""
	I1213 12:05:42.396175  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.396186  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:42.396193  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:42.396254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:42.423111  620795 cri.go:89] found id: ""
	I1213 12:05:42.423137  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.423146  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:42.423155  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:42.423167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:42.440295  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:42.440325  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:42.504038  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:42.504059  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:42.504071  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.550928  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:42.550966  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:42.608904  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:42.608935  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:45.037527  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:47.536245  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:45.181124  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:45.197731  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:45.197873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:45.246027  620795 cri.go:89] found id: ""
	I1213 12:05:45.246070  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.246081  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:45.246106  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:45.246220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:45.279332  620795 cri.go:89] found id: ""
	I1213 12:05:45.279388  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.279398  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:45.279404  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:45.279509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:45.314910  620795 cri.go:89] found id: ""
	I1213 12:05:45.314988  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.315000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:45.315010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:45.315114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:45.343055  620795 cri.go:89] found id: ""
	I1213 12:05:45.343130  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.343153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:45.343175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:45.343282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:45.370166  620795 cri.go:89] found id: ""
	I1213 12:05:45.370240  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.370275  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:45.370299  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:45.370391  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:45.396456  620795 cri.go:89] found id: ""
	I1213 12:05:45.396480  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.396489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:45.396495  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:45.396550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:45.421687  620795 cri.go:89] found id: ""
	I1213 12:05:45.421711  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.421720  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:45.421726  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:45.421781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:45.446648  620795 cri.go:89] found id: ""
	I1213 12:05:45.446672  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.446681  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:45.446691  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:45.446702  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:45.512020  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:45.512055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:45.543051  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:45.543084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:45.640767  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:45.640789  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:45.640802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:45.670787  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:45.670822  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:48.201632  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:48.211975  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:48.212046  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:48.241331  620795 cri.go:89] found id: ""
	I1213 12:05:48.241355  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.241364  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:48.241371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:48.241430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:48.266481  620795 cri.go:89] found id: ""
	I1213 12:05:48.266506  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.266515  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:48.266523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:48.266581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:48.292562  620795 cri.go:89] found id: ""
	I1213 12:05:48.292587  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.292597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:48.292604  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:48.292666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:48.316829  620795 cri.go:89] found id: ""
	I1213 12:05:48.316853  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.316862  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:48.316869  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:48.316928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:48.341279  620795 cri.go:89] found id: ""
	I1213 12:05:48.341304  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.341313  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:48.341320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:48.341395  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:48.370602  620795 cri.go:89] found id: ""
	I1213 12:05:48.370668  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.370684  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:48.370692  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:48.370757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:48.395975  620795 cri.go:89] found id: ""
	I1213 12:05:48.396001  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.396011  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:48.396017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:48.396076  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:48.422104  620795 cri.go:89] found id: ""
	I1213 12:05:48.422129  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.422139  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:48.422150  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:48.422163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:48.487414  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:48.487451  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:48.504893  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:48.504924  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:48.613440  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:48.613472  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:48.613485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:48.643454  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:48.643496  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:49.537116  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:52.036281  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:51.173081  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:51.184091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:51.184220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:51.209714  620795 cri.go:89] found id: ""
	I1213 12:05:51.209741  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.209751  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:51.209757  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:51.209815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:51.236381  620795 cri.go:89] found id: ""
	I1213 12:05:51.236414  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.236423  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:51.236429  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:51.236495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:51.266394  620795 cri.go:89] found id: ""
	I1213 12:05:51.266428  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.266437  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:51.266443  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:51.266509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:51.293949  620795 cri.go:89] found id: ""
	I1213 12:05:51.293981  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.293991  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:51.293998  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:51.294062  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:51.324019  620795 cri.go:89] found id: ""
	I1213 12:05:51.324042  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.324056  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:51.324062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:51.324145  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:51.352992  620795 cri.go:89] found id: ""
	I1213 12:05:51.353023  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.353032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:51.353039  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:51.353098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:51.378872  620795 cri.go:89] found id: ""
	I1213 12:05:51.378898  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.378907  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:51.378914  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:51.378976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:51.406670  620795 cri.go:89] found id: ""
	I1213 12:05:51.406695  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.406703  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:51.406713  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:51.406728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:51.469269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:51.469290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:51.469304  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:51.497318  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:51.497352  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:51.534646  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:51.534680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:51.618348  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:51.618388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.137197  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:54.147708  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:54.147778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:54.173064  620795 cri.go:89] found id: ""
	I1213 12:05:54.173089  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.173098  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:54.173105  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:54.173164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:54.198688  620795 cri.go:89] found id: ""
	I1213 12:05:54.198713  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.198723  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:54.198733  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:54.198789  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:54.224472  620795 cri.go:89] found id: ""
	I1213 12:05:54.224497  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.224506  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:54.224512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:54.224571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 12:05:54.536956  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:56.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:54.254875  620795 cri.go:89] found id: ""
	I1213 12:05:54.254900  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.254909  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:54.254916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:54.254985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:54.286287  620795 cri.go:89] found id: ""
	I1213 12:05:54.286314  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.286322  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:54.286329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:54.286384  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:54.312009  620795 cri.go:89] found id: ""
	I1213 12:05:54.312034  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.312043  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:54.312050  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:54.312109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:54.338472  620795 cri.go:89] found id: ""
	I1213 12:05:54.338506  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.338516  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:54.338522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:54.338590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:54.363767  620795 cri.go:89] found id: ""
	I1213 12:05:54.363791  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.363799  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:54.363810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:54.363827  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:54.429426  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:54.429462  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.446820  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:54.446859  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:54.514113  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:54.514137  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:54.514150  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:54.547597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:54.547688  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.126156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:57.136777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:57.136854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:57.166084  620795 cri.go:89] found id: ""
	I1213 12:05:57.166107  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.166116  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:57.166122  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:57.166180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:57.194344  620795 cri.go:89] found id: ""
	I1213 12:05:57.194368  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.194377  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:57.194384  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:57.194445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:57.220264  620795 cri.go:89] found id: ""
	I1213 12:05:57.220289  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.220298  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:57.220305  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:57.220362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:57.245200  620795 cri.go:89] found id: ""
	I1213 12:05:57.245222  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.245230  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:57.245236  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:57.245292  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:57.272963  620795 cri.go:89] found id: ""
	I1213 12:05:57.272987  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.272996  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:57.273003  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:57.273061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:57.297916  620795 cri.go:89] found id: ""
	I1213 12:05:57.297940  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.297947  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:57.297954  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:57.298016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:57.323201  620795 cri.go:89] found id: ""
	I1213 12:05:57.323226  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.323235  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:57.323241  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:57.323301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:57.348727  620795 cri.go:89] found id: ""
	I1213 12:05:57.348759  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.348769  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:57.348779  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:57.348794  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:57.424991  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:57.425015  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:57.425027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:57.454618  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:57.454652  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.482599  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:57.482627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:57.556901  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:57.556982  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:05:58.537235  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:01.037253  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:00.078226  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:00.114729  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:00.114815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:00.214510  620795 cri.go:89] found id: ""
	I1213 12:06:00.214537  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.214547  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:00.214560  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:00.214644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:00.283401  620795 cri.go:89] found id: ""
	I1213 12:06:00.283433  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.283443  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:00.283450  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:00.283564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:00.333853  620795 cri.go:89] found id: ""
	I1213 12:06:00.333946  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.333974  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:00.333999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:00.334124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:00.370564  620795 cri.go:89] found id: ""
	I1213 12:06:00.370647  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.370670  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:00.370693  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:00.370796  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:00.400318  620795 cri.go:89] found id: ""
	I1213 12:06:00.400355  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.400365  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:00.400373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:00.400451  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:00.429349  620795 cri.go:89] found id: ""
	I1213 12:06:00.429376  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.429387  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:00.429394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:00.429480  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:00.457513  620795 cri.go:89] found id: ""
	I1213 12:06:00.457540  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.457549  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:00.457555  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:00.457617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:00.484050  620795 cri.go:89] found id: ""
	I1213 12:06:00.484077  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.484086  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:00.484096  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:00.484110  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:00.564314  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:00.564357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:00.586853  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:00.586884  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:00.678609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:00.678679  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:00.678699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:00.708726  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:00.708764  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:03.239868  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:03.250271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:03.250342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:03.278221  620795 cri.go:89] found id: ""
	I1213 12:06:03.278246  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.278254  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:03.278261  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:03.278323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:03.307255  620795 cri.go:89] found id: ""
	I1213 12:06:03.307280  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.307288  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:03.307295  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:03.307358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:03.334371  620795 cri.go:89] found id: ""
	I1213 12:06:03.334394  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.334402  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:03.334408  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:03.334465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:03.359920  620795 cri.go:89] found id: ""
	I1213 12:06:03.359947  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.359959  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:03.359966  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:03.360026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:03.388349  620795 cri.go:89] found id: ""
	I1213 12:06:03.388373  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.388382  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:03.388389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:03.388446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:03.413684  620795 cri.go:89] found id: ""
	I1213 12:06:03.413712  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.413721  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:03.413727  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:03.413786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:03.438590  620795 cri.go:89] found id: ""
	I1213 12:06:03.438613  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.438622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:03.438629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:03.438686  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:03.466031  620795 cri.go:89] found id: ""
	I1213 12:06:03.466065  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.466074  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:03.466084  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:03.466095  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:03.540002  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:03.540037  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:03.581254  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:03.581285  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:03.657609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:03.657641  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:03.657654  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:03.686248  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:03.686284  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:03.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:05.537188  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:07.537266  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:06.215254  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:06.226059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:06.226130  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:06.252206  620795 cri.go:89] found id: ""
	I1213 12:06:06.252229  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.252237  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:06.252243  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:06.252306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:06.282327  620795 cri.go:89] found id: ""
	I1213 12:06:06.282349  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.282358  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:06.282364  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:06.282425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:06.312866  620795 cri.go:89] found id: ""
	I1213 12:06:06.312889  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.312898  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:06.312905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:06.312964  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:06.339757  620795 cri.go:89] found id: ""
	I1213 12:06:06.339828  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.339851  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:06.339865  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:06.339937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:06.366465  620795 cri.go:89] found id: ""
	I1213 12:06:06.366491  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.366508  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:06.366515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:06.366589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:06.395704  620795 cri.go:89] found id: ""
	I1213 12:06:06.395727  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.395735  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:06.395742  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:06.395800  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:06.420941  620795 cri.go:89] found id: ""
	I1213 12:06:06.420966  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.420974  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:06.420981  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:06.421040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:06.446747  620795 cri.go:89] found id: ""
	I1213 12:06:06.446771  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.446781  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:06.446790  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:06.446802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:06.515396  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:06.515437  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:06.537368  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:06.537458  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:06.638118  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:06.638202  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:06.638230  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:06.668749  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:06.668789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.204205  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:09.214694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:09.214763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:06:10.037386  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:12.536953  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:09.240252  620795 cri.go:89] found id: ""
	I1213 12:06:09.240291  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.240301  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:09.240307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:09.240372  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:09.267161  620795 cri.go:89] found id: ""
	I1213 12:06:09.267188  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.267197  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:09.267203  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:09.267263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:09.292472  620795 cri.go:89] found id: ""
	I1213 12:06:09.292501  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.292510  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:09.292517  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:09.292581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:09.317718  620795 cri.go:89] found id: ""
	I1213 12:06:09.317745  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.317754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:09.317760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:09.317819  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:09.342979  620795 cri.go:89] found id: ""
	I1213 12:06:09.343006  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.343015  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:09.343021  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:09.343080  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:09.370344  620795 cri.go:89] found id: ""
	I1213 12:06:09.370368  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.370377  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:09.370383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:09.370441  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:09.397428  620795 cri.go:89] found id: ""
	I1213 12:06:09.397451  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.397461  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:09.397467  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:09.397527  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:09.422862  620795 cri.go:89] found id: ""
	I1213 12:06:09.422890  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.422900  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:09.422909  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:09.422923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:09.486031  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:09.486057  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:09.486070  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:09.514736  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:09.514772  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.586482  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:09.586558  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:09.660422  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:09.660459  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.179299  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:12.190230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:12.190302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:12.216052  620795 cri.go:89] found id: ""
	I1213 12:06:12.216076  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.216085  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:12.216092  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:12.216150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:12.245417  620795 cri.go:89] found id: ""
	I1213 12:06:12.245443  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.245453  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:12.245460  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:12.245525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:12.272357  620795 cri.go:89] found id: ""
	I1213 12:06:12.272382  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.272391  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:12.272397  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:12.272459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:12.297431  620795 cri.go:89] found id: ""
	I1213 12:06:12.297458  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.297467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:12.297479  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:12.297537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:12.322773  620795 cri.go:89] found id: ""
	I1213 12:06:12.322796  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.322805  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:12.322829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:12.322894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:12.348212  620795 cri.go:89] found id: ""
	I1213 12:06:12.348278  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.348293  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:12.348301  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:12.348360  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:12.378078  620795 cri.go:89] found id: ""
	I1213 12:06:12.378105  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.378115  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:12.378122  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:12.378186  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:12.403938  620795 cri.go:89] found id: ""
	I1213 12:06:12.404005  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.404029  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:12.404044  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:12.404056  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:12.432395  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:12.432433  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:12.465021  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:12.465055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:12.533527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:12.533564  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.557847  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:12.557876  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:12.649280  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:15.036244  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:17.037163  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:15.150199  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:15.161093  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:15.161164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:15.188375  620795 cri.go:89] found id: ""
	I1213 12:06:15.188402  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.188411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:15.188420  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:15.188494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:15.213569  620795 cri.go:89] found id: ""
	I1213 12:06:15.213592  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.213601  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:15.213607  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:15.213667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:15.244468  620795 cri.go:89] found id: ""
	I1213 12:06:15.244490  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.244499  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:15.244505  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:15.244565  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:15.269446  620795 cri.go:89] found id: ""
	I1213 12:06:15.269469  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.269478  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:15.269484  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:15.269544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:15.297921  620795 cri.go:89] found id: ""
	I1213 12:06:15.297947  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.297957  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:15.297965  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:15.298029  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:15.323225  620795 cri.go:89] found id: ""
	I1213 12:06:15.323248  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.323256  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:15.323263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:15.323322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:15.349965  620795 cri.go:89] found id: ""
	I1213 12:06:15.349988  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.349999  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:15.350005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:15.350067  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:15.378207  620795 cri.go:89] found id: ""
	I1213 12:06:15.378236  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.378247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:15.378258  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:15.378271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:15.443150  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:15.443182  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:15.459353  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:15.459388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:15.546545  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:15.546611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:15.546638  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:15.582173  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:15.582258  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:18.126037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:18.137115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:18.137190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:18.164991  620795 cri.go:89] found id: ""
	I1213 12:06:18.165017  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.165026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:18.165033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:18.165092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:18.191806  620795 cri.go:89] found id: ""
	I1213 12:06:18.191832  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.191841  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:18.191848  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:18.191906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:18.222284  620795 cri.go:89] found id: ""
	I1213 12:06:18.222310  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.222320  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:18.222329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:18.222389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:18.250305  620795 cri.go:89] found id: ""
	I1213 12:06:18.250332  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.250342  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:18.250348  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:18.250406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:18.276798  620795 cri.go:89] found id: ""
	I1213 12:06:18.276823  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.276833  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:18.276841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:18.276901  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:18.301916  620795 cri.go:89] found id: ""
	I1213 12:06:18.301943  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.301952  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:18.301959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:18.302017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:18.327545  620795 cri.go:89] found id: ""
	I1213 12:06:18.327569  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.327577  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:18.327584  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:18.327681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:18.352817  620795 cri.go:89] found id: ""
	I1213 12:06:18.352844  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.352854  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:18.352863  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:18.352902  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:18.418564  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:18.418601  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:18.434897  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:18.434928  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:18.499340  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:18.499366  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:18.499380  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:18.528897  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:18.528980  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:19.537261  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:22.037303  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:21.104122  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:21.114671  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:21.114786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:21.140990  620795 cri.go:89] found id: ""
	I1213 12:06:21.141014  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.141024  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:21.141030  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:21.141087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:21.168480  620795 cri.go:89] found id: ""
	I1213 12:06:21.168510  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.168519  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:21.168526  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:21.168583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:21.193893  620795 cri.go:89] found id: ""
	I1213 12:06:21.193916  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.193924  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:21.193930  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:21.193985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:21.222789  620795 cri.go:89] found id: ""
	I1213 12:06:21.222811  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.222820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:21.222827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:21.222885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:21.254379  620795 cri.go:89] found id: ""
	I1213 12:06:21.254402  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.254411  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:21.254417  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:21.254476  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:21.280020  620795 cri.go:89] found id: ""
	I1213 12:06:21.280049  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.280058  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:21.280065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:21.280123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:21.305920  620795 cri.go:89] found id: ""
	I1213 12:06:21.305942  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.305952  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:21.305957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:21.306031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:21.334376  620795 cri.go:89] found id: ""
	I1213 12:06:21.334400  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.334409  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:21.334417  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:21.334429  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:21.362868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:21.362906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:21.397678  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:21.397727  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:21.465535  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:21.465574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:21.482417  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:21.482443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:21.566636  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:24.068339  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:24.079607  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:24.079684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:24.105575  620795 cri.go:89] found id: ""
	I1213 12:06:24.105609  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.105619  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:24.105626  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:24.105696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:24.131798  620795 cri.go:89] found id: ""
	I1213 12:06:24.131830  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.131840  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:24.131846  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:24.131905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:24.157068  620795 cri.go:89] found id: ""
	I1213 12:06:24.157096  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.157106  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:24.157113  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:24.157168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:24.186737  620795 cri.go:89] found id: ""
	I1213 12:06:24.186762  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.186772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:24.186779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:24.186843  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:24.214700  620795 cri.go:89] found id: ""
	I1213 12:06:24.214726  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.214745  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:24.214751  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:24.214815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1213 12:06:24.537013  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:27.037104  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:24.242048  620795 cri.go:89] found id: ""
	I1213 12:06:24.242074  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.242083  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:24.242090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:24.242180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:24.270953  620795 cri.go:89] found id: ""
	I1213 12:06:24.270978  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.270987  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:24.270994  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:24.271074  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:24.296220  620795 cri.go:89] found id: ""
	I1213 12:06:24.296246  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.296256  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:24.296267  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:24.296278  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:24.325330  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:24.325367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:24.355217  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:24.355255  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:24.421526  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:24.421566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:24.438978  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:24.439012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:24.514169  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.015192  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:27.026779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:27.026871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:27.054321  620795 cri.go:89] found id: ""
	I1213 12:06:27.054347  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.054357  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:27.054364  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:27.054423  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:27.084443  620795 cri.go:89] found id: ""
	I1213 12:06:27.084467  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.084476  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:27.084482  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:27.084542  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:27.110224  620795 cri.go:89] found id: ""
	I1213 12:06:27.110251  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.110260  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:27.110267  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:27.110326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:27.141821  620795 cri.go:89] found id: ""
	I1213 12:06:27.141847  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.141857  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:27.141863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:27.141953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:27.168110  620795 cri.go:89] found id: ""
	I1213 12:06:27.168143  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.168153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:27.168160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:27.168228  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:27.193708  620795 cri.go:89] found id: ""
	I1213 12:06:27.193775  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.193791  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:27.193802  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:27.193862  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:27.220542  620795 cri.go:89] found id: ""
	I1213 12:06:27.220569  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.220578  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:27.220585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:27.220673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:27.248536  620795 cri.go:89] found id: ""
	I1213 12:06:27.248614  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.248630  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:27.248641  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:27.248653  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:27.314354  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:27.314389  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:27.331795  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:27.331824  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:27.397269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.397290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:27.397303  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:27.425995  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:27.426034  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:29.537185  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:32.037043  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:29.964336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:29.975190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:29.975264  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:30.020235  620795 cri.go:89] found id: ""
	I1213 12:06:30.020330  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.020353  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:30.020373  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:30.020492  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:30.064384  620795 cri.go:89] found id: ""
	I1213 12:06:30.064422  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.064431  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:30.064438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:30.064537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:30.093930  620795 cri.go:89] found id: ""
	I1213 12:06:30.093974  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.094003  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:30.094018  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:30.094092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:30.121799  620795 cri.go:89] found id: ""
	I1213 12:06:30.121830  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.121846  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:30.121854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:30.121994  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:30.150127  620795 cri.go:89] found id: ""
	I1213 12:06:30.150153  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.150163  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:30.150170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:30.150232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:30.177848  620795 cri.go:89] found id: ""
	I1213 12:06:30.177873  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.177883  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:30.177889  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:30.177948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:30.204179  620795 cri.go:89] found id: ""
	I1213 12:06:30.204216  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.204225  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:30.204235  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:30.204295  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:30.230625  620795 cri.go:89] found id: ""
	I1213 12:06:30.230653  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.230663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:30.230673  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:30.230685  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:30.297598  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:30.297634  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:30.314962  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:30.314993  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:30.380114  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:30.380136  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:30.380148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:30.408485  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:30.408523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:32.936773  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:32.947334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:32.947408  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:32.974265  620795 cri.go:89] found id: ""
	I1213 12:06:32.974291  620795 logs.go:282] 0 containers: []
	W1213 12:06:32.974300  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:32.974307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:32.974365  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:33.005585  620795 cri.go:89] found id: ""
	I1213 12:06:33.005616  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.005627  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:33.005633  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:33.005704  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:33.036036  620795 cri.go:89] found id: ""
	I1213 12:06:33.036058  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.036072  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:33.036079  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:33.036136  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:33.062415  620795 cri.go:89] found id: ""
	I1213 12:06:33.062439  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.062448  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:33.062455  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:33.062515  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:33.091004  620795 cri.go:89] found id: ""
	I1213 12:06:33.091072  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.091095  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:33.091115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:33.091193  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:33.116964  620795 cri.go:89] found id: ""
	I1213 12:06:33.116989  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.116999  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:33.117005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:33.117084  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:33.143886  620795 cri.go:89] found id: ""
	I1213 12:06:33.143908  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.143918  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:33.143924  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:33.143984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:33.177672  620795 cri.go:89] found id: ""
	I1213 12:06:33.177697  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.177707  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:33.177716  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:33.177728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:33.194235  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:33.194266  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:33.258679  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:33.258703  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:33.258715  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:33.287694  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:33.287731  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:33.319142  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:33.319168  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:34.037106  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:36.037218  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:35.883653  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:35.894470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:35.894540  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:35.922164  620795 cri.go:89] found id: ""
	I1213 12:06:35.922243  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.922268  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:35.922286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:35.922378  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:35.948794  620795 cri.go:89] found id: ""
	I1213 12:06:35.948824  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.948833  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:35.948840  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:35.948916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:35.976985  620795 cri.go:89] found id: ""
	I1213 12:06:35.977012  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.977023  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:35.977030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:35.977097  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:36.008179  620795 cri.go:89] found id: ""
	I1213 12:06:36.008210  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.008221  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:36.008229  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:36.008306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:36.037414  620795 cri.go:89] found id: ""
	I1213 12:06:36.037434  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.037442  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:36.037448  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:36.037505  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:36.066253  620795 cri.go:89] found id: ""
	I1213 12:06:36.066290  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.066304  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:36.066319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:36.066394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:36.093841  620795 cri.go:89] found id: ""
	I1213 12:06:36.093938  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.093955  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:36.093963  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:36.094042  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:36.119692  620795 cri.go:89] found id: ""
	I1213 12:06:36.119728  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.119737  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:36.119747  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:36.119761  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:36.136247  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:36.136322  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:36.202464  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:36.202486  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:36.202500  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:36.230571  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:36.230606  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:36.257928  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:36.257955  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:38.826068  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:38.841833  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:38.841915  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:38.871763  620795 cri.go:89] found id: ""
	I1213 12:06:38.871788  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.871797  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:38.871803  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:38.871870  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:38.897931  620795 cri.go:89] found id: ""
	I1213 12:06:38.897956  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.897966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:38.897972  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:38.898064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:38.928095  620795 cri.go:89] found id: ""
	I1213 12:06:38.928121  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.928131  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:38.928138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:38.928202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:38.954066  620795 cri.go:89] found id: ""
	I1213 12:06:38.954090  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.954098  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:38.954105  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:38.954168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:38.978723  620795 cri.go:89] found id: ""
	I1213 12:06:38.978752  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.978762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:38.978769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:38.978825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:39.006341  620795 cri.go:89] found id: ""
	I1213 12:06:39.006374  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.006383  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:39.006390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:39.006462  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:39.032585  620795 cri.go:89] found id: ""
	I1213 12:06:39.032612  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.032622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:39.032629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:39.032699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:39.061395  620795 cri.go:89] found id: ""
	I1213 12:06:39.061426  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.061436  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:39.061446  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:39.061457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:39.091343  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:39.091367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:39.160940  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:39.160987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:39.177451  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:39.177490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:38.536279  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:40.537278  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:43.037128  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:39.246489  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:39.246510  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:39.246524  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:41.775639  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:41.794476  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:41.794600  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:41.831000  620795 cri.go:89] found id: ""
	I1213 12:06:41.831074  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.831102  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:41.831121  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:41.831203  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:41.872779  620795 cri.go:89] found id: ""
	I1213 12:06:41.872806  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.872816  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:41.872823  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:41.872903  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:41.902394  620795 cri.go:89] found id: ""
	I1213 12:06:41.902420  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.902429  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:41.902435  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:41.902494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:41.929459  620795 cri.go:89] found id: ""
	I1213 12:06:41.929485  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.929494  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:41.929501  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:41.929563  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:41.955676  620795 cri.go:89] found id: ""
	I1213 12:06:41.955700  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.955716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:41.955724  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:41.955783  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:41.981839  620795 cri.go:89] found id: ""
	I1213 12:06:41.981865  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.981875  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:41.981882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:41.981939  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:42.021720  620795 cri.go:89] found id: ""
	I1213 12:06:42.021808  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.021827  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:42.021836  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:42.021908  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:42.052304  620795 cri.go:89] found id: ""
	I1213 12:06:42.052332  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.052341  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:42.052351  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:42.052382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:42.071214  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:42.071250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:42.151103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:42.151127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:42.151146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:42.183473  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:42.183646  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:42.226797  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:42.226834  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:45.037308  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:47.537265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:44.796943  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:44.821281  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:44.821413  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:44.863598  620795 cri.go:89] found id: ""
	I1213 12:06:44.863672  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.863697  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:44.863718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:44.863805  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:44.892309  620795 cri.go:89] found id: ""
	I1213 12:06:44.892395  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.892418  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:44.892438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:44.892552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:44.918444  620795 cri.go:89] found id: ""
	I1213 12:06:44.918522  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.918557  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:44.918581  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:44.918673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:44.944223  620795 cri.go:89] found id: ""
	I1213 12:06:44.944249  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.944258  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:44.944265  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:44.944327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:44.970515  620795 cri.go:89] found id: ""
	I1213 12:06:44.970548  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.970559  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:44.970566  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:44.970626  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:44.996938  620795 cri.go:89] found id: ""
	I1213 12:06:44.996966  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.996976  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:44.996983  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:44.997050  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:45.050971  620795 cri.go:89] found id: ""
	I1213 12:06:45.051001  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.051020  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:45.051028  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:45.051107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:45.095037  620795 cri.go:89] found id: ""
	I1213 12:06:45.095076  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.095087  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:45.095098  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:45.095116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:45.209528  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:45.209618  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:45.240275  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:45.240311  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:45.322872  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:45.322895  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:45.322909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:45.353126  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:45.353162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:47.883672  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:47.894317  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:47.894394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:47.920883  620795 cri.go:89] found id: ""
	I1213 12:06:47.920909  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.920919  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:47.920927  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:47.920985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:47.947168  620795 cri.go:89] found id: ""
	I1213 12:06:47.947197  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.947207  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:47.947214  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:47.947279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:47.972678  620795 cri.go:89] found id: ""
	I1213 12:06:47.972701  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.972710  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:47.972717  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:47.972779  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:48.010849  620795 cri.go:89] found id: ""
	I1213 12:06:48.010915  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.010939  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:48.010961  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:48.011038  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:48.040005  620795 cri.go:89] found id: ""
	I1213 12:06:48.040074  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.040098  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:48.040118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:48.040211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:48.067778  620795 cri.go:89] found id: ""
	I1213 12:06:48.067806  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.067815  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:48.067822  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:48.067884  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:48.096165  620795 cri.go:89] found id: ""
	I1213 12:06:48.096207  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.096218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:48.096224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:48.096297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:48.123725  620795 cri.go:89] found id: ""
	I1213 12:06:48.123761  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.123771  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:48.123781  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:48.123793  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:48.153693  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:48.153733  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:48.185148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:48.185227  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:48.251689  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:48.251724  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:48.269048  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:48.269079  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:48.336435  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:50.037084  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:52.037310  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:50.836744  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:50.848522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:50.848593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:50.874981  620795 cri.go:89] found id: ""
	I1213 12:06:50.875065  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.875088  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:50.875108  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:50.875219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:50.900176  620795 cri.go:89] found id: ""
	I1213 12:06:50.900203  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.900213  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:50.900219  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:50.900277  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:50.929844  620795 cri.go:89] found id: ""
	I1213 12:06:50.929869  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.929878  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:50.929885  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:50.929943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:50.955008  620795 cri.go:89] found id: ""
	I1213 12:06:50.955033  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.955042  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:50.955049  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:50.955104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:50.982109  620795 cri.go:89] found id: ""
	I1213 12:06:50.982134  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.982143  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:50.982149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:50.982211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:51.013066  620795 cri.go:89] found id: ""
	I1213 12:06:51.013144  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.013160  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:51.013168  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:51.013236  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:51.042207  620795 cri.go:89] found id: ""
	I1213 12:06:51.042233  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.042243  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:51.042250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:51.042315  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:51.068089  620795 cri.go:89] found id: ""
	I1213 12:06:51.068116  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.068125  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:51.068135  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:51.068146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:51.136510  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:51.136550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:51.153539  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:51.153567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:51.227168  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:51.227240  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:51.227271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:51.256505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:51.256541  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:53.786599  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:53.808412  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:53.808498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:53.866097  620795 cri.go:89] found id: ""
	I1213 12:06:53.866124  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.866133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:53.866140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:53.866197  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:53.896398  620795 cri.go:89] found id: ""
	I1213 12:06:53.896426  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.896435  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:53.896442  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:53.896499  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:53.922228  620795 cri.go:89] found id: ""
	I1213 12:06:53.922255  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.922265  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:53.922271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:53.922333  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:53.947081  620795 cri.go:89] found id: ""
	I1213 12:06:53.947107  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.947116  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:53.947123  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:53.947177  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:53.972340  620795 cri.go:89] found id: ""
	I1213 12:06:53.972365  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.972374  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:53.972381  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:53.972437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:54.000806  620795 cri.go:89] found id: ""
	I1213 12:06:54.000835  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.000844  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:54.000851  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:54.000925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:54.030584  620795 cri.go:89] found id: ""
	I1213 12:06:54.030617  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.030626  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:54.030648  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:54.030734  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:54.056807  620795 cri.go:89] found id: ""
	I1213 12:06:54.056833  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.056842  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:54.056877  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:54.056897  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:54.122299  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:54.122347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:54.139911  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:54.139944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:54.202433  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:54.202453  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:54.202466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:54.230939  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:54.230977  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:54.536621  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:56.537197  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:56.761244  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:56.773199  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:56.773280  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:56.833295  620795 cri.go:89] found id: ""
	I1213 12:06:56.833323  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.833338  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:56.833345  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:56.833410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:56.877141  620795 cri.go:89] found id: ""
	I1213 12:06:56.877179  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.877189  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:56.877195  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:56.877255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:56.909304  620795 cri.go:89] found id: ""
	I1213 12:06:56.909329  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.909337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:56.909344  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:56.909402  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:56.937175  620795 cri.go:89] found id: ""
	I1213 12:06:56.937206  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.937215  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:56.937222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:56.937283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:56.962816  620795 cri.go:89] found id: ""
	I1213 12:06:56.962839  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.962848  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:56.962854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:56.962909  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:56.988340  620795 cri.go:89] found id: ""
	I1213 12:06:56.988364  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.988372  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:56.988379  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:56.988438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:57.014873  620795 cri.go:89] found id: ""
	I1213 12:06:57.014956  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.014979  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:57.014997  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:57.015107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:57.042222  620795 cri.go:89] found id: ""
	I1213 12:06:57.042295  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.042331  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:57.042357  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:57.042383  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:57.070110  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:57.070148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:57.097788  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:57.097812  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:57.164029  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:57.164067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:57.182586  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:57.182619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:57.253568  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:59.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:01.537092  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:59.753877  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:59.764872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:59.764943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:59.794978  620795 cri.go:89] found id: ""
	I1213 12:06:59.795002  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.795016  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:59.795027  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:59.795086  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:59.832235  620795 cri.go:89] found id: ""
	I1213 12:06:59.832264  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.832276  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:59.832283  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:59.832342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:59.879189  620795 cri.go:89] found id: ""
	I1213 12:06:59.879217  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.879227  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:59.879233  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:59.879296  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:59.906738  620795 cri.go:89] found id: ""
	I1213 12:06:59.906766  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.906775  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:59.906782  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:59.906838  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:59.934746  620795 cri.go:89] found id: ""
	I1213 12:06:59.934774  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.934783  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:59.934790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:59.934852  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:59.962016  620795 cri.go:89] found id: ""
	I1213 12:06:59.962049  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.962059  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:59.962066  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:59.962123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:59.988024  620795 cri.go:89] found id: ""
	I1213 12:06:59.988047  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.988056  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:59.988062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:59.988118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:00.062022  620795 cri.go:89] found id: ""
	I1213 12:07:00.062049  620795 logs.go:282] 0 containers: []
	W1213 12:07:00.062059  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:00.062076  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:00.062094  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:00.179599  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:00.181365  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:00.211914  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:00.211958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:00.303311  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:00.303333  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:00.303347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:00.339996  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:00.340039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:02.882696  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:02.898926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:02.899000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:02.928919  620795 cri.go:89] found id: ""
	I1213 12:07:02.928949  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.928959  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:02.928967  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:02.929030  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:02.955168  620795 cri.go:89] found id: ""
	I1213 12:07:02.955194  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.955209  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:02.955215  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:02.955273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:02.984105  620795 cri.go:89] found id: ""
	I1213 12:07:02.984132  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.984141  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:02.984159  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:02.984220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:03.011185  620795 cri.go:89] found id: ""
	I1213 12:07:03.011210  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.011219  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:03.011227  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:03.011289  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:03.038557  620795 cri.go:89] found id: ""
	I1213 12:07:03.038580  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.038588  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:03.038594  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:03.038656  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:03.064610  620795 cri.go:89] found id: ""
	I1213 12:07:03.064650  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.064661  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:03.064667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:03.064725  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:03.090406  620795 cri.go:89] found id: ""
	I1213 12:07:03.090432  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.090441  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:03.090447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:03.090506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:03.117733  620795 cri.go:89] found id: ""
	I1213 12:07:03.117761  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.117770  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:03.117780  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:03.117792  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:03.185975  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:03.185999  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:03.186011  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:03.214353  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:03.214387  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:03.244844  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:03.244873  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:03.310569  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:03.310608  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
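The block above is one complete pass of minikube's diagnostic retry loop: it looks for a running kube-apiserver process, asks CRI-O for any control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers describe-nodes, CRI-O, container-status, kubelet, and dmesg output; `kubectl describe nodes` fails because nothing is listening on localhost:8443. A minimal bash sketch of the same checks, for reproducing the diagnosis by hand on the node (assumptions: run via `minikube ssh` or directly on the node; the curl health probe is illustrative and not part of minikube's own loop):

    #!/usr/bin/env bash
    # 1. Is a kube-apiserver process running at all? (same pgrep pattern as the log)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # 2. Does CRI-O know about any control-plane containers?
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      [ -z "${ids}" ] && echo "no container found matching ${name}"
    done

    # 3. Is anything answering on the apiserver port? (the log shows connection refused here)
    curl -ksS --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable on :8443"

    # 4. The same log sources minikube collects while it retries
    sudo journalctl -u kubelet -n 400 --no-pager | tail -n 40
    sudo journalctl -u crio -n 400 --no-pager | tail -n 40

The cycles that follow repeat this pass roughly every three seconds while the API server stays unreachable.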
	W1213 12:07:04.037144  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:06.537015  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:05.828010  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:05.840499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:05.840570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:05.867194  620795 cri.go:89] found id: ""
	I1213 12:07:05.867272  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.867295  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:05.867314  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:05.867394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:05.894013  620795 cri.go:89] found id: ""
	I1213 12:07:05.894044  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.894054  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:05.894061  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:05.894126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:05.920207  620795 cri.go:89] found id: ""
	I1213 12:07:05.920234  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.920244  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:05.920250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:05.920309  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:05.948255  620795 cri.go:89] found id: ""
	I1213 12:07:05.948280  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.948289  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:05.948295  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:05.948352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:05.975137  620795 cri.go:89] found id: ""
	I1213 12:07:05.975162  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.975211  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:05.975222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:05.975283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:06.006992  620795 cri.go:89] found id: ""
	I1213 12:07:06.007020  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.007030  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:06.007037  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:06.007106  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:06.035032  620795 cri.go:89] found id: ""
	I1213 12:07:06.035067  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.035077  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:06.035084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:06.035157  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:06.066833  620795 cri.go:89] found id: ""
	I1213 12:07:06.066865  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.066875  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:06.066885  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:06.066899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:06.134254  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:06.134284  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:06.134297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:06.163816  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:06.163852  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:06.194055  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:06.194084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:06.262450  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:06.262550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:08.779798  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:08.793568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:08.793654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:08.848358  620795 cri.go:89] found id: ""
	I1213 12:07:08.848399  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.848408  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:08.848415  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:08.848485  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:08.881239  620795 cri.go:89] found id: ""
	I1213 12:07:08.881268  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.881278  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:08.881284  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:08.881358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:08.912007  620795 cri.go:89] found id: ""
	I1213 12:07:08.912038  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.912059  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:08.912070  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:08.912143  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:08.948718  620795 cri.go:89] found id: ""
	I1213 12:07:08.948744  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.948754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:08.948760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:08.948815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:08.974195  620795 cri.go:89] found id: ""
	I1213 12:07:08.974224  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.974234  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:08.974240  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:08.974298  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:09.000368  620795 cri.go:89] found id: ""
	I1213 12:07:09.000409  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.000420  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:09.000428  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:09.000500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:09.027504  620795 cri.go:89] found id: ""
	I1213 12:07:09.027539  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.027548  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:09.027554  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:09.027611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:09.052844  620795 cri.go:89] found id: ""
	I1213 12:07:09.052870  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.052879  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:09.052888  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:09.052899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:09.080443  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:09.080483  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:09.109721  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:09.109747  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:09.174545  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:09.174581  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:09.192943  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:09.192974  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:09.036994  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:11.537211  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:09.256162  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:11.756459  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:11.766714  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:11.766784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:11.797701  620795 cri.go:89] found id: ""
	I1213 12:07:11.797728  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.797737  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:11.797753  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:11.797832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:11.833489  620795 cri.go:89] found id: ""
	I1213 12:07:11.833563  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.833585  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:11.833604  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:11.833692  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:11.869283  620795 cri.go:89] found id: ""
	I1213 12:07:11.869305  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.869314  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:11.869320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:11.869376  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:11.899820  620795 cri.go:89] found id: ""
	I1213 12:07:11.899845  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.899855  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:11.899862  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:11.899925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:11.926125  620795 cri.go:89] found id: ""
	I1213 12:07:11.926150  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.926159  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:11.926166  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:11.926224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:11.952049  620795 cri.go:89] found id: ""
	I1213 12:07:11.952131  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.952165  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:11.952178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:11.952250  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:11.982382  620795 cri.go:89] found id: ""
	I1213 12:07:11.982407  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.982415  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:11.982421  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:11.982494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:12.014887  620795 cri.go:89] found id: ""
	I1213 12:07:12.014912  620795 logs.go:282] 0 containers: []
	W1213 12:07:12.014921  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:12.014931  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:12.014943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:12.080370  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:12.080407  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:12.097493  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:12.097534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:12.163658  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:12.163680  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:12.163692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:12.192505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:12.192544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:14.037223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:16.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
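The interleaved W-level lines tagged with pid 622913 come from the parallel no-preload test: it is polling the Ready condition of node no-preload-307409 against https://192.168.85.2:8443, getting connection refused, and retrying at roughly 2.5-second intervals. A hedged sketch of an equivalent manual poll, assuming the kubeconfig for that cluster is the active context (node name and interval are taken from the log; everything else is illustrative):

    # wait until the node reports Ready, tolerating an unreachable apiserver
    until kubectl get node no-preload-307409 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null | grep -q True; do
      echo "node not Ready yet (or apiserver unreachable); retrying"
      sleep 2.5
    done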
	I1213 12:07:14.721085  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:14.731999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:14.732070  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:14.758997  620795 cri.go:89] found id: ""
	I1213 12:07:14.759023  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.759032  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:14.759039  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:14.759098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:14.831264  620795 cri.go:89] found id: ""
	I1213 12:07:14.831294  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.831303  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:14.831310  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:14.831366  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:14.882934  620795 cri.go:89] found id: ""
	I1213 12:07:14.882964  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.882973  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:14.882980  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:14.883040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:14.916858  620795 cri.go:89] found id: ""
	I1213 12:07:14.916888  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.916898  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:14.916905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:14.916969  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:14.942297  620795 cri.go:89] found id: ""
	I1213 12:07:14.942334  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.942343  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:14.942355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:14.942431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:14.967905  620795 cri.go:89] found id: ""
	I1213 12:07:14.967927  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.967936  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:14.967942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:14.968000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:14.993041  620795 cri.go:89] found id: ""
	I1213 12:07:14.993107  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.993131  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:14.993145  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:14.993224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:15.027730  620795 cri.go:89] found id: ""
	I1213 12:07:15.027755  620795 logs.go:282] 0 containers: []
	W1213 12:07:15.027765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:15.027776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:15.027789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:15.095470  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:15.095507  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:15.113485  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:15.113567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:15.183456  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:15.183481  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:15.183497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:15.212670  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:15.212706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:17.745028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:17.755868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:17.755965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:17.830528  620795 cri.go:89] found id: ""
	I1213 12:07:17.830551  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.830559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:17.830585  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:17.830654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:17.866003  620795 cri.go:89] found id: ""
	I1213 12:07:17.866029  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.866038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:17.866044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:17.866102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:17.891564  620795 cri.go:89] found id: ""
	I1213 12:07:17.891588  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.891597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:17.891603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:17.891664  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:17.918740  620795 cri.go:89] found id: ""
	I1213 12:07:17.918768  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.918776  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:17.918783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:17.918845  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:17.950736  620795 cri.go:89] found id: ""
	I1213 12:07:17.950774  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.950784  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:17.950790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:17.950854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:17.976775  620795 cri.go:89] found id: ""
	I1213 12:07:17.976799  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.976809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:17.976816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:17.976883  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:18.008430  620795 cri.go:89] found id: ""
	I1213 12:07:18.008460  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.008469  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:18.008477  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:18.008564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:18.037446  620795 cri.go:89] found id: ""
	I1213 12:07:18.037477  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.037488  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:18.037502  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:18.037517  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:18.068414  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:18.068443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:18.138588  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:18.138627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:18.155698  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:18.155729  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:18.222792  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:18.222835  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:18.222847  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:19.037064  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:21.536199  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:20.751476  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:20.762121  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:20.762190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:20.818771  620795 cri.go:89] found id: ""
	I1213 12:07:20.818794  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.818803  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:20.818810  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:20.818877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:20.873533  620795 cri.go:89] found id: ""
	I1213 12:07:20.873556  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.873564  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:20.873581  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:20.873639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:20.900689  620795 cri.go:89] found id: ""
	I1213 12:07:20.900716  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.900725  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:20.900732  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:20.900790  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:20.926298  620795 cri.go:89] found id: ""
	I1213 12:07:20.926324  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.926334  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:20.926340  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:20.926400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:20.955692  620795 cri.go:89] found id: ""
	I1213 12:07:20.955767  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.955789  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:20.955808  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:20.955904  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:20.981101  620795 cri.go:89] found id: ""
	I1213 12:07:20.981126  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.981135  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:20.981146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:20.981208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:21.012906  620795 cri.go:89] found id: ""
	I1213 12:07:21.012933  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.012942  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:21.012949  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:21.013024  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:21.043717  620795 cri.go:89] found id: ""
	I1213 12:07:21.043743  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.043753  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:21.043764  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:21.043776  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:21.116319  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:21.116368  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:21.133173  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:21.133204  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:21.201103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:21.201127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:21.201140  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:21.229422  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:21.229457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:23.763349  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:23.781088  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:23.781159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:23.857623  620795 cri.go:89] found id: ""
	I1213 12:07:23.857648  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.857666  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:23.857673  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:23.857736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:23.882807  620795 cri.go:89] found id: ""
	I1213 12:07:23.882833  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.882842  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:23.882849  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:23.882907  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:23.908402  620795 cri.go:89] found id: ""
	I1213 12:07:23.908430  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.908440  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:23.908447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:23.908506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:23.933800  620795 cri.go:89] found id: ""
	I1213 12:07:23.933826  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.933835  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:23.933841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:23.933919  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:23.959222  620795 cri.go:89] found id: ""
	I1213 12:07:23.959248  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.959259  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:23.959266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:23.959352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:23.985470  620795 cri.go:89] found id: ""
	I1213 12:07:23.985496  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.985505  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:23.985512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:23.985570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:24.014442  620795 cri.go:89] found id: ""
	I1213 12:07:24.014477  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.014487  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:24.014494  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:24.014556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:24.043282  620795 cri.go:89] found id: ""
	I1213 12:07:24.043308  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.043318  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:24.043328  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:24.043340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:24.075046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:24.075073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:24.143658  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:24.143701  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:24.160736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:24.160765  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:24.224652  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:24.224675  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:24.224692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:23.536309  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:25.537129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:28.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:26.754848  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:26.765356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:26.765429  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:26.818982  620795 cri.go:89] found id: ""
	I1213 12:07:26.819005  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.819013  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:26.819020  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:26.819078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:26.871231  620795 cri.go:89] found id: ""
	I1213 12:07:26.871253  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.871262  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:26.871268  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:26.871326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:26.898363  620795 cri.go:89] found id: ""
	I1213 12:07:26.898443  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.898467  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:26.898486  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:26.898578  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:26.923840  620795 cri.go:89] found id: ""
	I1213 12:07:26.923866  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.923875  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:26.923882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:26.923940  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:26.952921  620795 cri.go:89] found id: ""
	I1213 12:07:26.952950  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.952960  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:26.952967  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:26.953028  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:26.984162  620795 cri.go:89] found id: ""
	I1213 12:07:26.984188  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.984197  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:26.984203  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:26.984282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:27.022329  620795 cri.go:89] found id: ""
	I1213 12:07:27.022397  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.022413  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:27.022420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:27.022479  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:27.048366  620795 cri.go:89] found id: ""
	I1213 12:07:27.048391  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.048401  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:27.048410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:27.048423  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:27.076996  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:27.077029  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:27.149458  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:27.149509  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:27.167444  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:27.167473  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:27.235232  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:27.235258  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:27.235270  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:30.537006  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:33.036221  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:29.764538  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:29.791446  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:29.791560  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:29.844876  620795 cri.go:89] found id: ""
	I1213 12:07:29.844953  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.844976  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:29.844996  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:29.845082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:29.884357  620795 cri.go:89] found id: ""
	I1213 12:07:29.884423  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.884441  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:29.884449  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:29.884508  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:29.914712  620795 cri.go:89] found id: ""
	I1213 12:07:29.914738  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.914748  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:29.914755  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:29.914813  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:29.940420  620795 cri.go:89] found id: ""
	I1213 12:07:29.940500  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.940516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:29.940524  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:29.940585  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:29.970378  620795 cri.go:89] found id: ""
	I1213 12:07:29.970404  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.970413  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:29.970420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:29.970478  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:29.996803  620795 cri.go:89] found id: ""
	I1213 12:07:29.996881  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.996898  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:29.996907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:29.996983  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:30.040874  620795 cri.go:89] found id: ""
	I1213 12:07:30.040904  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.040913  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:30.040920  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:30.040995  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:30.083632  620795 cri.go:89] found id: ""
	I1213 12:07:30.083658  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.083667  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:30.083676  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:30.083689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:30.149516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:30.149553  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:30.167731  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:30.167816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:30.233503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:30.233567  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:30.233586  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:30.263464  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:30.263497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
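	The cycle above repeats for every control-plane component: the harness probes for a running kube-apiserver process, asks the CRI runtime for each expected container by name, and, finding none, collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. Below is a minimal bash sketch of that same probe sequence, assembled only from the commands recorded in this log (paths and flags are as logged; it assumes crictl, journalctl, and the bundled kubectl are present on the node), not the harness's actual implementation:

	#!/usr/bin/env bash
	# Sketch of the diagnostic probe recorded in the log above.
	set -u

	# 1. Is a kube-apiserver process for this minikube profile running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

	# 2. Ask the CRI runtime for each expected container by name.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
	done

	# 3. When nothing is found, gather the same logs the harness collects.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # fails above: connection to localhost:8443 refused
	sudo journalctl -u crio -n 400
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a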
	I1213 12:07:32.796303  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:32.813180  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:32.813263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:32.849335  620795 cri.go:89] found id: ""
	I1213 12:07:32.849413  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.849456  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:32.849481  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:32.849570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:32.880068  620795 cri.go:89] found id: ""
	I1213 12:07:32.880092  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.880101  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:32.880107  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:32.880165  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:32.907166  620795 cri.go:89] found id: ""
	I1213 12:07:32.907193  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.907202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:32.907209  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:32.907266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:32.933296  620795 cri.go:89] found id: ""
	I1213 12:07:32.933366  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.933388  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:32.933407  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:32.933500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:32.959040  620795 cri.go:89] found id: ""
	I1213 12:07:32.959106  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.959130  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:32.959149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:32.959233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:32.989508  620795 cri.go:89] found id: ""
	I1213 12:07:32.989531  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.989540  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:32.989546  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:32.989629  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:33.018978  620795 cri.go:89] found id: ""
	I1213 12:07:33.019002  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.019010  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:33.019017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:33.019098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:33.046327  620795 cri.go:89] found id: ""
	I1213 12:07:33.046359  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.046368  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:33.046378  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:33.046419  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:33.075176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:33.075213  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:33.107277  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:33.107309  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:33.174349  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:33.174384  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:33.192737  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:33.192770  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:33.259992  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:07:35.037005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:37.037071  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:35.760267  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:35.771899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:35.771965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:35.816451  620795 cri.go:89] found id: ""
	I1213 12:07:35.816499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.816508  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:35.816519  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:35.816576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:35.874010  620795 cri.go:89] found id: ""
	I1213 12:07:35.874031  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.874040  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:35.874046  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:35.874109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:35.901470  620795 cri.go:89] found id: ""
	I1213 12:07:35.901499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.901509  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:35.901515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:35.901577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:35.929967  620795 cri.go:89] found id: ""
	I1213 12:07:35.929988  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.929997  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:35.930004  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:35.930061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:35.959220  620795 cri.go:89] found id: ""
	I1213 12:07:35.959245  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.959255  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:35.959262  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:35.959323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:35.988889  620795 cri.go:89] found id: ""
	I1213 12:07:35.988916  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.988925  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:35.988932  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:35.988990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:36.017868  620795 cri.go:89] found id: ""
	I1213 12:07:36.017896  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.017906  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:36.017912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:36.017975  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:36.046482  620795 cri.go:89] found id: ""
	I1213 12:07:36.046508  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.046517  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:36.046527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:36.046539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:36.063480  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:36.063675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:36.134374  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:36.134437  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:36.134465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:36.164786  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:36.164831  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:36.195048  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:36.195077  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:38.762384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:38.773774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:38.773860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:38.823096  620795 cri.go:89] found id: ""
	I1213 12:07:38.823118  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.823127  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:38.823133  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:38.823192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:38.859735  620795 cri.go:89] found id: ""
	I1213 12:07:38.859758  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.859766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:38.859773  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:38.859832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:38.888780  620795 cri.go:89] found id: ""
	I1213 12:07:38.888806  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.888815  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:38.888821  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:38.888885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:38.918480  620795 cri.go:89] found id: ""
	I1213 12:07:38.918506  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.918516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:38.918522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:38.918579  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:38.944442  620795 cri.go:89] found id: ""
	I1213 12:07:38.944475  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.944485  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:38.944492  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:38.944548  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:38.972111  620795 cri.go:89] found id: ""
	I1213 12:07:38.972138  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.972148  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:38.972156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:38.972217  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:38.999220  620795 cri.go:89] found id: ""
	I1213 12:07:38.999249  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.999259  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:38.999266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:38.999387  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:39.027462  620795 cri.go:89] found id: ""
	I1213 12:07:39.027489  620795 logs.go:282] 0 containers: []
	W1213 12:07:39.027498  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:39.027508  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:39.027551  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:39.045387  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:39.045421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:39.113555  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:39.113577  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:39.113591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:39.141868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:39.141905  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:39.170660  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:39.170687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:07:39.536473  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:41.536533  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:41.738914  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:41.749712  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:41.749788  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:41.815733  620795 cri.go:89] found id: ""
	I1213 12:07:41.815757  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.815767  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:41.815774  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:41.815837  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:41.853772  620795 cri.go:89] found id: ""
	I1213 12:07:41.853794  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.853802  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:41.853808  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:41.853864  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:41.880989  620795 cri.go:89] found id: ""
	I1213 12:07:41.881012  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.881021  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:41.881027  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:41.881085  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:41.910432  620795 cri.go:89] found id: ""
	I1213 12:07:41.910455  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.910464  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:41.910470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:41.910525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:41.938539  620795 cri.go:89] found id: ""
	I1213 12:07:41.938561  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.938570  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:41.938576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:41.938636  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:41.964574  620795 cri.go:89] found id: ""
	I1213 12:07:41.964608  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.964617  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:41.964624  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:41.964681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:41.989355  620795 cri.go:89] found id: ""
	I1213 12:07:41.989380  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.989389  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:41.989396  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:41.989456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:42.019802  620795 cri.go:89] found id: ""
	I1213 12:07:42.019830  620795 logs.go:282] 0 containers: []
	W1213 12:07:42.019839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:42.019849  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:42.019861  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:42.052058  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:42.052087  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:42.123300  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:42.123360  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:42.144729  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:42.144768  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:42.227868  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:42.227896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:42.227910  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:44.037002  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:46.037183  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:44.760193  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:44.770916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:44.770989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:44.803100  620795 cri.go:89] found id: ""
	I1213 12:07:44.803124  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.803133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:44.803140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:44.803195  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:44.851212  620795 cri.go:89] found id: ""
	I1213 12:07:44.851235  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.851244  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:44.851250  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:44.851307  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:44.902052  620795 cri.go:89] found id: ""
	I1213 12:07:44.902075  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.902084  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:44.902090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:44.902150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:44.933898  620795 cri.go:89] found id: ""
	I1213 12:07:44.933926  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.933935  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:44.933942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:44.934026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:44.963132  620795 cri.go:89] found id: ""
	I1213 12:07:44.963158  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.963167  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:44.963174  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:44.963261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:44.988132  620795 cri.go:89] found id: ""
	I1213 12:07:44.988163  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.988174  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:44.988181  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:44.988238  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:45.046906  620795 cri.go:89] found id: ""
	I1213 12:07:45.046934  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.046943  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:45.046951  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:45.047019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:45.080632  620795 cri.go:89] found id: ""
	I1213 12:07:45.080730  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.080752  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:45.080792  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:45.080810  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:45.157685  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:45.157797  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:45.212507  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:45.212574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:45.292666  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:45.292707  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:45.292720  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:45.321658  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:45.321690  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:47.858977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:47.870353  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:47.870425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:47.902849  620795 cri.go:89] found id: ""
	I1213 12:07:47.902874  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.902883  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:47.902890  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:47.902958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:47.928841  620795 cri.go:89] found id: ""
	I1213 12:07:47.928866  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.928875  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:47.928882  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:47.928943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:47.954469  620795 cri.go:89] found id: ""
	I1213 12:07:47.954494  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.954503  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:47.954510  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:47.954571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:47.984225  620795 cri.go:89] found id: ""
	I1213 12:07:47.984248  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.984257  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:47.984263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:47.984327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:48.013666  620795 cri.go:89] found id: ""
	I1213 12:07:48.013694  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.013704  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:48.013710  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:48.013776  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:48.043313  620795 cri.go:89] found id: ""
	I1213 12:07:48.043341  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.043351  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:48.043358  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:48.043445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:48.070641  620795 cri.go:89] found id: ""
	I1213 12:07:48.070669  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.070680  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:48.070687  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:48.070767  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:48.096729  620795 cri.go:89] found id: ""
	I1213 12:07:48.096754  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.096764  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:48.096773  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:48.096785  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:48.129289  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:48.129318  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:48.196743  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:48.196781  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:48.213775  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:48.213802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:48.282000  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:48.282076  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:48.282104  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:48.537001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:50.537083  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:53.037078  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:50.813946  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:50.834838  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:50.834928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:50.871307  620795 cri.go:89] found id: ""
	I1213 12:07:50.871329  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.871337  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:50.871343  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:50.871400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:50.900887  620795 cri.go:89] found id: ""
	I1213 12:07:50.900913  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.900922  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:50.900929  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:50.900987  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:50.926497  620795 cri.go:89] found id: ""
	I1213 12:07:50.926569  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.926606  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:50.926631  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:50.926721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:50.954230  620795 cri.go:89] found id: ""
	I1213 12:07:50.954256  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.954266  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:50.954273  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:50.954331  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:50.980389  620795 cri.go:89] found id: ""
	I1213 12:07:50.980414  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.980425  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:50.980431  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:50.980490  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:51.007396  620795 cri.go:89] found id: ""
	I1213 12:07:51.007423  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.007433  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:51.007444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:51.007507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:51.038515  620795 cri.go:89] found id: ""
	I1213 12:07:51.038540  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.038550  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:51.038556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:51.038611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:51.066063  620795 cri.go:89] found id: ""
	I1213 12:07:51.066088  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.066096  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:51.066111  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:51.066122  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:51.131363  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:51.131402  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:51.148223  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:51.148253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:51.211768  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:51.211791  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:51.211807  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:51.239792  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:51.239825  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:53.772909  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:53.794190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:53.794255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:53.863195  620795 cri.go:89] found id: ""
	I1213 12:07:53.863228  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.863239  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:53.863246  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:53.863323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:53.894744  620795 cri.go:89] found id: ""
	I1213 12:07:53.894812  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.894836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:53.894855  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:53.894941  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:53.922176  620795 cri.go:89] found id: ""
	I1213 12:07:53.922244  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.922266  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:53.922284  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:53.922371  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:53.948409  620795 cri.go:89] found id: ""
	I1213 12:07:53.948437  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.948446  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:53.948453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:53.948512  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:53.974142  620795 cri.go:89] found id: ""
	I1213 12:07:53.974222  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.974244  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:53.974263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:53.974369  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:54.002307  620795 cri.go:89] found id: ""
	I1213 12:07:54.002343  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.002353  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:54.002361  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:54.002440  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:54.030334  620795 cri.go:89] found id: ""
	I1213 12:07:54.030413  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.030438  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:54.030457  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:54.030566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:54.056614  620795 cri.go:89] found id: ""
	I1213 12:07:54.056697  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.056713  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:54.056724  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:54.056737  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:54.124215  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:54.124253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:54.141024  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:54.141052  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:54.203423  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:54.203445  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:54.203457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:54.231323  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:54.231355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:55.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:57.537019  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:56.762827  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:56.786084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:56.786208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:56.855486  620795 cri.go:89] found id: ""
	I1213 12:07:56.855531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.855542  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:56.855549  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:56.855615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:56.883436  620795 cri.go:89] found id: ""
	I1213 12:07:56.883531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.883557  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:56.883587  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:56.883648  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:56.908626  620795 cri.go:89] found id: ""
	I1213 12:07:56.908708  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.908739  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:56.908752  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:56.908821  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:56.935174  620795 cri.go:89] found id: ""
	I1213 12:07:56.935201  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.935210  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:56.935217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:56.935302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:56.964101  620795 cri.go:89] found id: ""
	I1213 12:07:56.964128  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.964139  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:56.964146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:56.964232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:56.989991  620795 cri.go:89] found id: ""
	I1213 12:07:56.990016  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.990025  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:56.990032  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:56.990117  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:57.021908  620795 cri.go:89] found id: ""
	I1213 12:07:57.021934  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.021944  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:57.021952  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:57.022015  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:57.050893  620795 cri.go:89] found id: ""
	I1213 12:07:57.050919  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.050929  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:57.050939  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:57.050958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:57.114649  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:57.114709  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:57.114743  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:57.142743  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:57.142778  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:57.171088  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:57.171120  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:57.236905  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:57.236948  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:00.039297  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:02.536522  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:59.754255  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:59.764877  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:59.764948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:59.800655  620795 cri.go:89] found id: ""
	I1213 12:07:59.800682  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.800691  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:59.800698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:59.800757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:59.844261  620795 cri.go:89] found id: ""
	I1213 12:07:59.844289  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.844299  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:59.844305  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:59.844363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:59.890278  620795 cri.go:89] found id: ""
	I1213 12:07:59.890303  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.890313  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:59.890319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:59.890379  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:59.918606  620795 cri.go:89] found id: ""
	I1213 12:07:59.918632  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.918641  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:59.918647  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:59.918703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:59.947895  620795 cri.go:89] found id: ""
	I1213 12:07:59.947918  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.947928  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:59.947934  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:59.947993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:59.973045  620795 cri.go:89] found id: ""
	I1213 12:07:59.973073  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.973082  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:59.973089  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:59.973163  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:00.009231  620795 cri.go:89] found id: ""
	I1213 12:08:00.009320  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.009353  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:00.009374  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:00.009507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:00.119476  620795 cri.go:89] found id: ""
	I1213 12:08:00.119618  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.119644  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:00.119687  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:00.119721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:00.145226  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:00.145450  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:00.282893  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:00.282923  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:00.282944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:00.371336  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:00.371439  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:00.430461  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:00.430503  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.002113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:03.014603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:03.014679  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:03.042673  620795 cri.go:89] found id: ""
	I1213 12:08:03.042701  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.042711  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:03.042718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:03.042778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:03.074056  620795 cri.go:89] found id: ""
	I1213 12:08:03.074133  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.074164  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:03.074185  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:03.074301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:03.101450  620795 cri.go:89] found id: ""
	I1213 12:08:03.101485  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.101495  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:03.101502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:03.101564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:03.132013  620795 cri.go:89] found id: ""
	I1213 12:08:03.132042  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.132053  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:03.132060  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:03.132123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:03.158035  620795 cri.go:89] found id: ""
	I1213 12:08:03.158057  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.158067  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:03.158074  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:03.158131  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:03.183772  620795 cri.go:89] found id: ""
	I1213 12:08:03.183800  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.183809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:03.183816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:03.183879  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:03.209685  620795 cri.go:89] found id: ""
	I1213 12:08:03.209710  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.209718  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:03.209725  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:03.209809  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:03.238718  620795 cri.go:89] found id: ""
	I1213 12:08:03.238742  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.238751  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:03.238760  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:03.238771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:03.266176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:03.266211  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:03.295327  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:03.295357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.371751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:03.371796  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:03.388535  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:03.388569  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:03.455075  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:05.037001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:07.037153  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:05.956468  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:05.967247  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:05.967349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:05.992470  620795 cri.go:89] found id: ""
	I1213 12:08:05.992495  620795 logs.go:282] 0 containers: []
	W1213 12:08:05.992504  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:05.992510  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:05.992576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:06.025309  620795 cri.go:89] found id: ""
	I1213 12:08:06.025339  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.025349  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:06.025356  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:06.025417  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:06.056164  620795 cri.go:89] found id: ""
	I1213 12:08:06.056192  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.056202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:06.056208  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:06.056268  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:06.091020  620795 cri.go:89] found id: ""
	I1213 12:08:06.091047  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.091057  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:06.091063  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:06.091124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:06.117741  620795 cri.go:89] found id: ""
	I1213 12:08:06.117767  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.117776  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:06.117792  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:06.117850  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:06.143430  620795 cri.go:89] found id: ""
	I1213 12:08:06.143454  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.143465  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:06.143472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:06.143558  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:06.169857  620795 cri.go:89] found id: ""
	I1213 12:08:06.169883  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.169892  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:06.169899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:06.169959  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:06.196298  620795 cri.go:89] found id: ""
	I1213 12:08:06.196325  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.196335  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:06.196344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:06.196385  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:06.212572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:06.212599  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:06.278450  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:06.278473  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:06.278485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:06.306640  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:06.306679  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:06.336266  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:06.336295  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:08.901791  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:08.912829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:08.912897  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:08.942435  620795 cri.go:89] found id: ""
	I1213 12:08:08.942467  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.942476  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:08.942483  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:08.942552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:08.968397  620795 cri.go:89] found id: ""
	I1213 12:08:08.968475  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.968508  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:08.968533  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:08.968615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:08.995667  620795 cri.go:89] found id: ""
	I1213 12:08:08.995734  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.995757  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:08.995776  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:08.995851  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:09.026748  620795 cri.go:89] found id: ""
	I1213 12:08:09.026827  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.026859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:09.026878  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:09.026961  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:09.052881  620795 cri.go:89] found id: ""
	I1213 12:08:09.052910  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.052919  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:09.052926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:09.053016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:09.079635  620795 cri.go:89] found id: ""
	I1213 12:08:09.079663  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.079673  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:09.079679  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:09.079740  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:09.106465  620795 cri.go:89] found id: ""
	I1213 12:08:09.106499  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.106507  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:09.106529  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:09.106610  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:09.132296  620795 cri.go:89] found id: ""
	I1213 12:08:09.132373  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.132389  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:09.132400  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:09.132411  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:09.198891  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:09.198937  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:09.215689  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:09.215718  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:09.536381  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:11.536495  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:09.283376  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:09.283399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:09.283412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:09.311953  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:09.311995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:11.844673  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:11.854957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:11.855031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:11.884334  620795 cri.go:89] found id: ""
	I1213 12:08:11.884361  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.884370  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:11.884377  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:11.884438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:11.911693  620795 cri.go:89] found id: ""
	I1213 12:08:11.911715  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.911724  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:11.911730  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:11.911785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:11.939653  620795 cri.go:89] found id: ""
	I1213 12:08:11.939679  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.939688  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:11.939694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:11.939753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:11.965596  620795 cri.go:89] found id: ""
	I1213 12:08:11.965622  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.965631  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:11.965639  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:11.965695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:11.994822  620795 cri.go:89] found id: ""
	I1213 12:08:11.994848  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.994857  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:11.994863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:11.994921  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:12.027085  620795 cri.go:89] found id: ""
	I1213 12:08:12.027111  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.027119  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:12.027127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:12.027189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:12.060592  620795 cri.go:89] found id: ""
	I1213 12:08:12.060621  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.060631  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:12.060637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:12.060695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:12.087001  620795 cri.go:89] found id: ""
	I1213 12:08:12.087026  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.087035  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:12.087046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:12.087057  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:12.154968  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:12.155007  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:12.173266  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:12.173296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:12.238320  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:12.238342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:12.238353  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:12.266852  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:12.266886  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:08:14.037082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:16.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:14.799502  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:14.811316  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:14.811495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:14.868310  620795 cri.go:89] found id: ""
	I1213 12:08:14.868404  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.868430  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:14.868485  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:14.868662  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:14.910677  620795 cri.go:89] found id: ""
	I1213 12:08:14.910744  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.910766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:14.910785  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:14.910872  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:14.939727  620795 cri.go:89] found id: ""
	I1213 12:08:14.939767  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.939777  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:14.939783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:14.939849  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:14.966035  620795 cri.go:89] found id: ""
	I1213 12:08:14.966069  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.966078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:14.966086  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:14.966160  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:14.994530  620795 cri.go:89] found id: ""
	I1213 12:08:14.994596  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.994619  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:14.994641  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:14.994727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:15.032176  620795 cri.go:89] found id: ""
	I1213 12:08:15.032213  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.032223  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:15.032230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:15.032294  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:15.063866  620795 cri.go:89] found id: ""
	I1213 12:08:15.063900  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.063910  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:15.063916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:15.063977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:15.094824  620795 cri.go:89] found id: ""
	I1213 12:08:15.094857  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.094867  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:15.094876  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:15.094888  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:15.123857  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:15.123926  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:15.189408  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:15.189444  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:15.208112  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:15.208143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:15.272770  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:15.272794  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:15.272806  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:17.802242  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:17.818907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:17.818976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:17.860553  620795 cri.go:89] found id: ""
	I1213 12:08:17.860577  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.860586  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:17.860594  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:17.860663  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:17.890844  620795 cri.go:89] found id: ""
	I1213 12:08:17.890868  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.890877  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:17.890883  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:17.890937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:17.916758  620795 cri.go:89] found id: ""
	I1213 12:08:17.916784  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.916794  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:17.916800  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:17.916860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:17.946527  620795 cri.go:89] found id: ""
	I1213 12:08:17.946564  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.946573  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:17.946598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:17.946684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:17.971981  620795 cri.go:89] found id: ""
	I1213 12:08:17.972004  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.972013  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:17.972020  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:17.972075  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:17.997005  620795 cri.go:89] found id: ""
	I1213 12:08:17.997042  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.997052  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:17.997059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:17.997126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:18.029007  620795 cri.go:89] found id: ""
	I1213 12:08:18.029038  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.029054  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:18.029061  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:18.029120  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:18.056596  620795 cri.go:89] found id: ""
	I1213 12:08:18.056625  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.056637  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:18.056647  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:18.056661  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:18.074846  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:18.074874  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:18.144092  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:18.144157  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:18.144176  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:18.173096  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:18.173134  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:18.208914  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:18.208943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:19.037143  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:21.537005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:20.774528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:20.788572  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:20.788639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:20.858764  620795 cri.go:89] found id: ""
	I1213 12:08:20.858786  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.858794  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:20.858800  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:20.858857  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:20.887866  620795 cri.go:89] found id: ""
	I1213 12:08:20.887888  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.887897  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:20.887904  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:20.887967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:20.918367  620795 cri.go:89] found id: ""
	I1213 12:08:20.918438  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.918462  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:20.918481  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:20.918566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:20.943267  620795 cri.go:89] found id: ""
	I1213 12:08:20.943292  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.943301  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:20.943308  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:20.943362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:20.972672  620795 cri.go:89] found id: ""
	I1213 12:08:20.972707  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.972716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:20.972723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:20.972781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:20.997368  620795 cri.go:89] found id: ""
	I1213 12:08:20.997394  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.997404  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:20.997411  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:20.997487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:21.029283  620795 cri.go:89] found id: ""
	I1213 12:08:21.029309  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.029319  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:21.029328  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:21.029382  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:21.054485  620795 cri.go:89] found id: ""
	I1213 12:08:21.054510  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.054520  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:21.054529  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:21.054540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:21.121036  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:21.121073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:21.137498  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:21.137526  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:21.201021  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:21.201047  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:21.201060  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:21.233120  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:21.233155  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:23.768528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:23.784788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:23.784875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:23.861902  620795 cri.go:89] found id: ""
	I1213 12:08:23.861933  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.861949  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:23.861956  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:23.862019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:23.890007  620795 cri.go:89] found id: ""
	I1213 12:08:23.890029  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.890038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:23.890044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:23.890104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:23.915427  620795 cri.go:89] found id: ""
	I1213 12:08:23.915450  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.915459  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:23.915465  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:23.915550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:23.941041  620795 cri.go:89] found id: ""
	I1213 12:08:23.941069  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.941078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:23.941085  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:23.941141  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:23.966860  620795 cri.go:89] found id: ""
	I1213 12:08:23.966886  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.966895  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:23.966902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:23.966958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:23.992499  620795 cri.go:89] found id: ""
	I1213 12:08:23.992528  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.992537  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:23.992558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:23.992616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:24.019996  620795 cri.go:89] found id: ""
	I1213 12:08:24.020030  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.020045  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:24.020052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:24.020129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:24.047181  620795 cri.go:89] found id: ""
	I1213 12:08:24.047216  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.047225  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:24.047234  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:24.047245  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:24.110372  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:24.110398  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:24.110412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:24.139714  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:24.139748  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:24.172397  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:24.172426  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:24.037139  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:26.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:24.240938  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:24.240975  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:26.757922  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:26.771140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:26.771256  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:26.808049  620795 cri.go:89] found id: ""
	I1213 12:08:26.808124  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.808149  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:26.808169  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:26.808258  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:26.845750  620795 cri.go:89] found id: ""
	I1213 12:08:26.845826  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.845851  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:26.845870  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:26.845951  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:26.885327  620795 cri.go:89] found id: ""
	I1213 12:08:26.885401  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.885424  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:26.885444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:26.885533  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:26.912813  620795 cri.go:89] found id: ""
	I1213 12:08:26.912844  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.912853  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:26.912860  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:26.912917  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:26.940224  620795 cri.go:89] found id: ""
	I1213 12:08:26.940301  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.940317  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:26.940325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:26.940383  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:26.970684  620795 cri.go:89] found id: ""
	I1213 12:08:26.970728  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.970738  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:26.970745  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:26.970825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:27.001739  620795 cri.go:89] found id: ""
	I1213 12:08:27.001821  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.001846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:27.001867  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:27.001968  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:27.029502  620795 cri.go:89] found id: ""
	I1213 12:08:27.029525  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.029533  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:27.029542  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:27.029561  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:27.097411  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:27.097433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:27.097445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:27.126207  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:27.126242  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:27.152776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:27.152814  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:27.218430  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:27.218466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:29.036447  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:31.536317  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:29.735087  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:29.746276  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:29.746353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:29.790488  620795 cri.go:89] found id: ""
	I1213 12:08:29.790563  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.790587  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:29.790607  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:29.790694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:29.863661  620795 cri.go:89] found id: ""
	I1213 12:08:29.863730  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.863747  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:29.863754  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:29.863822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:29.889696  620795 cri.go:89] found id: ""
	I1213 12:08:29.889723  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.889731  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:29.889738  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:29.889793  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:29.917557  620795 cri.go:89] found id: ""
	I1213 12:08:29.917619  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.917642  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:29.917657  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:29.917732  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:29.941179  620795 cri.go:89] found id: ""
	I1213 12:08:29.941201  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.941210  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:29.941217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:29.941276  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:29.965683  620795 cri.go:89] found id: ""
	I1213 12:08:29.965758  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.965775  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:29.965783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:29.965858  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:29.994076  620795 cri.go:89] found id: ""
	I1213 12:08:29.994111  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.994121  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:29.994127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:29.994189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:30.034696  620795 cri.go:89] found id: ""
	I1213 12:08:30.034723  620795 logs.go:282] 0 containers: []
	W1213 12:08:30.034733  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:30.034743  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:30.034756  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:30.103277  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:30.103319  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:30.120811  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:30.120901  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:30.194375  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:30.194399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:30.194412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:30.225794  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:30.225830  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:32.757391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:32.768065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:32.768178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:32.801083  620795 cri.go:89] found id: ""
	I1213 12:08:32.801105  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.801114  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:32.801123  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:32.801179  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:32.839546  620795 cri.go:89] found id: ""
	I1213 12:08:32.839567  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.839576  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:32.839582  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:32.839637  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:32.888939  620795 cri.go:89] found id: ""
	I1213 12:08:32.889005  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.889029  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:32.889044  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:32.889115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:32.926624  620795 cri.go:89] found id: ""
	I1213 12:08:32.926651  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.926666  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:32.926676  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:32.926752  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:32.958800  620795 cri.go:89] found id: ""
	I1213 12:08:32.958835  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.958844  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:32.958850  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:32.958916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:32.989617  620795 cri.go:89] found id: ""
	I1213 12:08:32.989692  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.989708  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:32.989721  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:32.989791  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:33.017551  620795 cri.go:89] found id: ""
	I1213 12:08:33.017623  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.017647  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:33.017659  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:33.017736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:33.043587  620795 cri.go:89] found id: ""
	I1213 12:08:33.043612  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.043621  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:33.043632  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:33.043644  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:33.114830  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:33.114904  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:33.114923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:33.144060  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:33.144098  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:33.174527  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:33.174559  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:33.242589  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:33.242622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:33.536995  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:35.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:38.037111  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:35.760100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:35.770376  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:35.770444  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:35.803335  620795 cri.go:89] found id: ""
	I1213 12:08:35.803356  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.803365  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:35.803371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:35.803427  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:35.837892  620795 cri.go:89] found id: ""
	I1213 12:08:35.837916  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.837926  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:35.837933  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:35.837989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:35.866561  620795 cri.go:89] found id: ""
	I1213 12:08:35.866588  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.866598  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:35.866605  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:35.866667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:35.892759  620795 cri.go:89] found id: ""
	I1213 12:08:35.892795  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.892804  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:35.892810  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:35.892880  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:35.923215  620795 cri.go:89] found id: ""
	I1213 12:08:35.923238  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.923247  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:35.923252  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:35.923310  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:35.950448  620795 cri.go:89] found id: ""
	I1213 12:08:35.950475  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.950484  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:35.950491  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:35.950546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:35.976121  620795 cri.go:89] found id: ""
	I1213 12:08:35.976149  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.976158  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:35.976165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:35.976247  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:36.007726  620795 cri.go:89] found id: ""
	I1213 12:08:36.007754  620795 logs.go:282] 0 containers: []
	W1213 12:08:36.007765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:36.007774  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:36.007789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:36.085423  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:36.085465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:36.104590  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:36.104621  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:36.174734  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:36.174757  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:36.174771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:36.204232  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:36.204271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:38.733384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:38.744052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:38.744118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:38.780661  620795 cri.go:89] found id: ""
	I1213 12:08:38.780685  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.780694  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:38.780704  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:38.780764  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:38.822383  620795 cri.go:89] found id: ""
	I1213 12:08:38.822407  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.822416  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:38.822422  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:38.822477  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:38.855498  620795 cri.go:89] found id: ""
	I1213 12:08:38.855544  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.855553  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:38.855565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:38.855619  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:38.885018  620795 cri.go:89] found id: ""
	I1213 12:08:38.885045  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.885055  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:38.885062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:38.885119  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:38.910126  620795 cri.go:89] found id: ""
	I1213 12:08:38.910162  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.910172  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:38.910179  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:38.910246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:38.940467  620795 cri.go:89] found id: ""
	I1213 12:08:38.940502  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.940513  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:38.940520  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:38.940597  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:38.966188  620795 cri.go:89] found id: ""
	I1213 12:08:38.966222  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.966232  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:38.966238  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:38.966303  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:38.995881  620795 cri.go:89] found id: ""
	I1213 12:08:38.995907  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.995917  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:38.995927  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:38.995939  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:39.015887  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:39.015917  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:39.098130  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:39.098150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:39.098163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:39.126236  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:39.126269  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:39.153815  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:39.153842  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:40.037886  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:42.536996  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:41.721729  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:41.732158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:41.732229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:41.760995  620795 cri.go:89] found id: ""
	I1213 12:08:41.761017  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.761026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:41.761033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:41.761087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:41.795082  620795 cri.go:89] found id: ""
	I1213 12:08:41.795105  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.795113  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:41.795119  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:41.795184  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:41.825959  620795 cri.go:89] found id: ""
	I1213 12:08:41.826033  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.826056  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:41.826076  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:41.826159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:41.852118  620795 cri.go:89] found id: ""
	I1213 12:08:41.852183  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.852198  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:41.852205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:41.852261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:41.877587  620795 cri.go:89] found id: ""
	I1213 12:08:41.877626  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.877636  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:41.877642  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:41.877706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:41.906166  620795 cri.go:89] found id: ""
	I1213 12:08:41.906192  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.906202  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:41.906216  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:41.906273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:41.935663  620795 cri.go:89] found id: ""
	I1213 12:08:41.935688  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.935697  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:41.935704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:41.935761  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:41.960919  620795 cri.go:89] found id: ""
	I1213 12:08:41.960943  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.960952  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:41.960960  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:41.960971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:41.989438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:41.989472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:42.026694  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:42.026779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:42.120242  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:42.120297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:42.141212  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:42.141246  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:42.216949  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:44.537110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:47.036204  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:44.717236  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:44.728891  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:44.728977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:44.753976  620795 cri.go:89] found id: ""
	I1213 12:08:44.754000  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.754008  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:44.754018  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:44.754078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:44.786705  620795 cri.go:89] found id: ""
	I1213 12:08:44.786732  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.786741  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:44.786748  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:44.786806  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:44.822299  620795 cri.go:89] found id: ""
	I1213 12:08:44.822328  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.822337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:44.822345  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:44.822401  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:44.856823  620795 cri.go:89] found id: ""
	I1213 12:08:44.856856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.856867  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:44.856873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:44.856930  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:44.882589  620795 cri.go:89] found id: ""
	I1213 12:08:44.882614  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.882623  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:44.882630  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:44.882688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:44.908466  620795 cri.go:89] found id: ""
	I1213 12:08:44.908491  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.908500  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:44.908507  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:44.908588  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:44.937829  620795 cri.go:89] found id: ""
	I1213 12:08:44.937856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.937865  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:44.937872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:44.937927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:44.963281  620795 cri.go:89] found id: ""
	I1213 12:08:44.963305  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.963315  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:44.963324  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:44.963335  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:44.991410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:44.991446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:45.037106  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:45.037139  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:45.136316  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:45.136362  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:45.159600  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:45.159635  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:45.275736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:47.775978  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:47.794424  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:47.794535  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:47.822730  620795 cri.go:89] found id: ""
	I1213 12:08:47.822773  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.822782  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:47.822794  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:47.822874  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:47.855882  620795 cri.go:89] found id: ""
	I1213 12:08:47.855909  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.855921  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:47.855928  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:47.855992  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:47.880824  620795 cri.go:89] found id: ""
	I1213 12:08:47.880849  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.880863  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:47.880870  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:47.880944  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:47.905536  620795 cri.go:89] found id: ""
	I1213 12:08:47.905558  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.905567  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:47.905573  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:47.905627  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:47.930629  620795 cri.go:89] found id: ""
	I1213 12:08:47.930651  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.930660  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:47.930666  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:47.930722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:47.963310  620795 cri.go:89] found id: ""
	I1213 12:08:47.963340  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.963348  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:47.963355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:47.963416  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:47.988259  620795 cri.go:89] found id: ""
	I1213 12:08:47.988284  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.988293  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:47.988300  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:47.988363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:48.016297  620795 cri.go:89] found id: ""
	I1213 12:08:48.016324  620795 logs.go:282] 0 containers: []
	W1213 12:08:48.016334  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:48.016344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:48.016358  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:48.036992  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:48.037157  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:48.110165  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:48.110186  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:48.110199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:48.138855  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:48.138892  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:48.167128  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:48.167162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:49.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:52.036223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:50.735817  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:50.746548  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:50.746616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:50.775549  620795 cri.go:89] found id: ""
	I1213 12:08:50.775575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.775585  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:50.775591  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:50.775646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:50.804612  620795 cri.go:89] found id: ""
	I1213 12:08:50.804635  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.804644  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:50.804650  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:50.804705  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:50.837625  620795 cri.go:89] found id: ""
	I1213 12:08:50.837650  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.837659  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:50.837665  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:50.837720  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:50.864589  620795 cri.go:89] found id: ""
	I1213 12:08:50.864612  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.864620  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:50.864627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:50.864687  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:50.889551  620795 cri.go:89] found id: ""
	I1213 12:08:50.889575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.889583  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:50.889589  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:50.889646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:50.919224  620795 cri.go:89] found id: ""
	I1213 12:08:50.919247  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.919255  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:50.919261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:50.919317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:50.944422  620795 cri.go:89] found id: ""
	I1213 12:08:50.944495  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.944574  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:50.944612  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:50.944696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:50.970021  620795 cri.go:89] found id: ""
	I1213 12:08:50.970086  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.970109  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:50.970132  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:50.970163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:50.986872  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:50.986906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:51.060506  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:51.060540  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:51.060552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:51.092480  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:51.092521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:51.123102  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:51.123131  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:53.694152  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:53.705704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:53.705773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:53.731245  620795 cri.go:89] found id: ""
	I1213 12:08:53.731268  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.731276  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:53.731282  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:53.731340  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:53.757925  620795 cri.go:89] found id: ""
	I1213 12:08:53.757957  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.757966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:53.757973  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:53.758036  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:53.808536  620795 cri.go:89] found id: ""
	I1213 12:08:53.808559  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.808568  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:53.808575  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:53.808635  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:53.840078  620795 cri.go:89] found id: ""
	I1213 12:08:53.840112  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.840122  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:53.840129  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:53.840189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:53.865894  620795 cri.go:89] found id: ""
	I1213 12:08:53.865917  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.865927  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:53.865933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:53.865993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:53.891498  620795 cri.go:89] found id: ""
	I1213 12:08:53.891542  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.891551  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:53.891558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:53.891621  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:53.917936  620795 cri.go:89] found id: ""
	I1213 12:08:53.917959  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.917968  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:53.917974  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:53.918032  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:53.943098  620795 cri.go:89] found id: ""
	I1213 12:08:53.943169  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.943193  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:53.943215  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:53.943252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:53.971597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:53.971637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:54.002508  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:54.002540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:54.080813  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:54.080899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:54.109629  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:54.109659  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:54.177694  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:54.036977  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:56.537074  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:56.677966  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:56.688667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:56.688741  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:56.713668  620795 cri.go:89] found id: ""
	I1213 12:08:56.713690  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.713699  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:56.713706  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:56.713762  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:56.741202  620795 cri.go:89] found id: ""
	I1213 12:08:56.741227  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.741236  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:56.741242  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:56.741339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:56.768922  620795 cri.go:89] found id: ""
	I1213 12:08:56.768942  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.768950  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:56.768957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:56.769013  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:56.797125  620795 cri.go:89] found id: ""
	I1213 12:08:56.797148  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.797157  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:56.797164  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:56.797218  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:56.824672  620795 cri.go:89] found id: ""
	I1213 12:08:56.824695  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.824703  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:56.824709  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:56.824763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:56.849420  620795 cri.go:89] found id: ""
	I1213 12:08:56.849446  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.849455  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:56.849462  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:56.849516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:56.875118  620795 cri.go:89] found id: ""
	I1213 12:08:56.875143  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.875152  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:56.875158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:56.875213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:56.900386  620795 cri.go:89] found id: ""
	I1213 12:08:56.900411  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.900420  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:56.900434  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:56.900446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:56.966130  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:56.966167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:56.982745  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:56.982773  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:57.073125  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:57.073146  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:57.073165  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:57.104552  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:57.104585  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:59.636110  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:59.649509  620795 out.go:203] 
	W1213 12:08:59.652376  620795 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 12:08:59.652409  620795 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 12:08:59.652418  620795 out.go:285] * Related issues:
	W1213 12:08:59.652431  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 12:08:59.652444  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 12:08:59.655226  620795 out.go:203] 
	W1213 12:08:59.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:01.536950  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:03.536998  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:06.036283  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> CRI-O <==
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494217646Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494225302Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494232317Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49423788Z" level=info msg="RDT not available in the host system"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49425041Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495095045Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495116264Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495131451Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495779293Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49580189Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495943824Z" level=info msg="Updated default CNI network name to "
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.496641734Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.497083731Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.497162501Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560451228Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560494723Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.56056025Z" level=info msg="Create NRI interface"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560660304Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560668633Z" level=info msg="runtime interface created"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.56068309Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560689769Z" level=info msg="runtime interface starting up..."
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560695849Z" level=info msg="starting plugins..."
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560708797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560776564Z" level=info msg="No systemd watchdog enabled"
	Dec 13 12:02:55 newest-cni-800979 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:09:09.915813   13816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:09.916543   13816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:09.918216   13816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:09.918658   13816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:09.920144   13816 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:09:09 up  3:51,  0 user,  load average: 0.74, 0.79, 1.22
	Linux newest-cni-800979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:09:07 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:08 newest-cni-800979 kubelet[13700]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:08 newest-cni-800979 kubelet[13700]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:08 newest-cni-800979 kubelet[13700]: E1213 12:09:08.132547   13700 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 494.
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:08 newest-cni-800979 kubelet[13720]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:08 newest-cni-800979 kubelet[13720]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:08 newest-cni-800979 kubelet[13720]: E1213 12:09:08.917246   13720 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:08 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:09 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 495.
	Dec 13 12:09:09 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:09 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:09 newest-cni-800979 kubelet[13792]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:09 newest-cni-800979 kubelet[13792]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:09 newest-cni-800979 kubelet[13792]: E1213 12:09:09.824042   13792 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:09 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:09 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (518.612459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-800979" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-800979
helpers_test.go:244: (dbg) docker inspect newest-cni-800979:

-- stdout --
	[
	    {
	        "Id": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	        "Created": "2025-12-13T11:52:51.619651061Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T12:02:49.509239436Z",
	            "FinishedAt": "2025-12-13T12:02:48.165379431Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/hosts",
	        "LogPath": "/var/lib/docker/containers/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef/4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef-json.log",
	        "Name": "/newest-cni-800979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-800979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-800979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4aef671a766b58164c3cd01dd454b6e4385766e2c6d5ed317018b324ca7344ef",
	                "LowerDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7d2cc87bdf8f5a9a60e544f17bca9528f6384a57e9d470177b306242d8113d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-800979",
	                "Source": "/var/lib/docker/volumes/newest-cni-800979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-800979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-800979",
	                "name.minikube.sigs.k8s.io": "newest-cni-800979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24ac9a215b72ee124284f478ff764304afc09b82226a2739c7b5f0f9a84a05cd",
	            "SandboxKey": "/var/run/docker/netns/24ac9a215b72",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-800979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:2e:cf:d5:d1:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de59fc08c8081b0c37df8bacf82db2ccccb307596588e9c22d7d094938935e3c",
	                    "EndpointID": "4aeedc678fe23c218965caf6e08605f8464cbaa26208ec7a8c460ea48b3e8143",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-800979",
	                        "4aef671a766b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (445.857818ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-800979 logs -n 25: (3.108296026s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ stop    │ -p embed-certs-326948 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:51 UTC │
	│ start   │ -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:51 UTC │ 13 Dec 25 11:52 UTC │
	│ image   │ default-k8s-diff-port-151605 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p default-k8s-diff-port-151605 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p default-k8s-diff-port-151605                                                                                                                                                                                                                      │ default-k8s-diff-port-151605 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p disable-driver-mounts-072590                                                                                                                                                                                                                      │ disable-driver-mounts-072590 │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ image   │ embed-certs-326948 image list --format=json                                                                                                                                                                                                          │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ pause   │ -p embed-certs-326948 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ delete  │ -p embed-certs-326948                                                                                                                                                                                                                                │ embed-certs-326948           │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │ 13 Dec 25 11:52 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 11:52 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-307409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:00 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-800979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:01 UTC │                     │
	│ stop    │ -p newest-cni-800979 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-800979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │ 13 Dec 25 12:02 UTC │
	│ start   │ -p newest-cni-800979 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:02 UTC │                     │
	│ stop    │ -p no-preload-307409 --alsologtostderr -v=3                                                                                                                                                                                                          │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ addons  │ enable dashboard -p no-preload-307409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │ 13 Dec 25 12:03 UTC │
	│ start   │ -p no-preload-307409 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-307409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:03 UTC │                     │
	│ image   │ newest-cni-800979 image list --format=json                                                                                                                                                                                                           │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	│ pause   │ -p newest-cni-800979 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	│ unpause │ -p newest-cni-800979 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-800979            │ jenkins │ v1.37.0 │ 13 Dec 25 12:09 UTC │ 13 Dec 25 12:09 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:03:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:03:03.050063  622913 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:03:03.050285  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050312  622913 out.go:374] Setting ErrFile to fd 2...
	I1213 12:03:03.050330  622913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:03:03.050625  622913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:03:03.051085  622913 out.go:368] Setting JSON to false
	I1213 12:03:03.052120  622913 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13535,"bootTime":1765613848,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:03:03.052229  622913 start.go:143] virtualization:  
	I1213 12:03:03.055383  622913 out.go:179] * [no-preload-307409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:03:03.059239  622913 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:03:03.059332  622913 notify.go:221] Checking for updates...
	I1213 12:03:03.064728  622913 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:03:03.067859  622913 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:03.070706  622913 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:03:03.073576  622913 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:03:03.076392  622913 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:03:03.079655  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:03.080246  622913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:03:03.113231  622913 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:03:03.113356  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.174414  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.164880125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:03:03.174536  622913 docker.go:319] overlay module found
	I1213 12:03:03.177638  622913 out.go:179] * Using the docker driver based on existing profile
	I1213 12:03:03.180320  622913 start.go:309] selected driver: docker
	I1213 12:03:03.180343  622913 start.go:927] validating driver "docker" against &{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.180449  622913 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:03:03.181174  622913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:03:03.236517  622913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 12:03:03.227319129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
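The cli_runner.go and info.go lines above show minikube shelling out to "docker system info --format {{json .}}" and decoding the JSON it gets back; the long info.go dump is that decoded structure printed in Go syntax. A minimal sketch of the same call, decoding only a few of the fields visible in the dump (the struct below is illustrative, not minikube's own type):

    // Sketch only: runs the same "docker system info --format {{json .}}" call
    // seen in the cli_runner.go line above and decodes a handful of the fields
    // that appear in the info.go dump. Field names are taken from that dump.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        ServerVersion   string
        OperatingSystem string
        Architecture    string
        NCPU            int
        MemTotal        int64
        CgroupDriver    string
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        fmt.Printf("docker %s on %s/%s, %d CPUs, cgroup driver %s\n",
            info.ServerVersion, info.OperatingSystem, info.Architecture, info.NCPU, info.CgroupDriver)
    }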
	I1213 12:03:03.236860  622913 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:03:03.236895  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:03.236967  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:03.237012  622913 start.go:353] cluster config:
	{Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:03.241932  622913 out.go:179] * Starting "no-preload-307409" primary control-plane node in "no-preload-307409" cluster
	I1213 12:03:03.244777  622913 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:03:03.247722  622913 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:03:03.250567  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:03.250698  622913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:03:03.250725  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.251056  622913 cache.go:107] acquiring lock: {Name:mkf4d74369c8245ecb55fb0e29b8225ca9f09ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251142  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1213 12:03:03.251161  622913 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.655µs
	I1213 12:03:03.251175  622913 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1213 12:03:03.251192  622913 cache.go:107] acquiring lock: {Name:mkb6b336872403a4d868a5d769900fdf1066c1c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251240  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1213 12:03:03.251249  622913 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 59.291µs
	I1213 12:03:03.251256  622913 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251279  622913 cache.go:107] acquiring lock: {Name:mkafdfd911f389f1e02c51849a66241927a5c213 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251318  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1213 12:03:03.251329  622913 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 50.749µs
	I1213 12:03:03.251341  622913 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251360  622913 cache.go:107] acquiring lock: {Name:mk8f79409d2ca53ad062fcf0126f6980a6193bdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251395  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1213 12:03:03.251406  622913 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 49.043µs
	I1213 12:03:03.251413  622913 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251422  622913 cache.go:107] acquiring lock: {Name:mk2037397f0606151b65f1037a4650bdb91f57be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251455  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1213 12:03:03.251465  622913 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 43.717µs
	I1213 12:03:03.251472  622913 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1213 12:03:03.251481  622913 cache.go:107] acquiring lock: {Name:mkcce925699bd9689e329c60f570e109b24fe773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251564  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1213 12:03:03.251578  622913 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 97.437µs
	I1213 12:03:03.251585  622913 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1213 12:03:03.251596  622913 cache.go:107] acquiring lock: {Name:mk7409e8a480c483310652cd8f23d5f9940a03a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251632  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1213 12:03:03.251642  622913 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 47.82µs
	I1213 12:03:03.251649  622913 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1213 12:03:03.251673  622913 cache.go:107] acquiring lock: {Name:mk4ff965cf9ab0943f63cb9d5079b89d443629ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.251707  622913 cache.go:115] /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1213 12:03:03.251716  622913 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 48.632µs
	I1213 12:03:03.251723  622913 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1213 12:03:03.251729  622913 cache.go:87] Successfully saved all images to host disk.
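Each cache.go triple above follows the same pattern: acquire a per-image lock, check whether the cached tarball already exists under .minikube/cache/images/arm64, and skip the save when it does, which is why every image reports success in well under a millisecond. A minimal sketch of that existence check (the path layout mirrors the log; the helper is illustrative, not the minikube cache API):

    // Sketch only: checks for a cached image tarball the way the cache.go
    // "exists ... skipping/succeeded" lines above do. Path layout mirrors the
    // log; this is not the actual minikube cache implementation.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachedImagePath turns "registry.k8s.io/pause:3.10.1" into
    // "<cacheDir>/registry.k8s.io/pause_3.10.1", as seen in the log.
    func cachedImagePath(cacheDir, image string) string {
        return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        cacheDir := "/home/jenkins/.minikube/cache/images/arm64" // illustrative
        image := "registry.k8s.io/pause:3.10.1"
        if _, err := os.Stat(cachedImagePath(cacheDir, image)); err == nil {
            fmt.Println("cache hit, skipping save for", image)
        } else {
            fmt.Println("cache miss, would save", image)
        }
    }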
	I1213 12:03:03.282338  622913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:03:03.282369  622913 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:03:03.282443  622913 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:03:03.282477  622913 start.go:360] acquireMachinesLock for no-preload-307409: {Name:mk5b591d9d6f446a65ecf56605831e84fbfd4c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:03:03.282544  622913 start.go:364] duration metric: took 41.937µs to acquireMachinesLock for "no-preload-307409"
	I1213 12:03:03.282565  622913 start.go:96] Skipping create...Using existing machine configuration
	I1213 12:03:03.282570  622913 fix.go:54] fixHost starting: 
	I1213 12:03:03.282851  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.304419  622913 fix.go:112] recreateIfNeeded on no-preload-307409: state=Stopped err=<nil>
	W1213 12:03:03.304448  622913 fix.go:138] unexpected machine state, will restart: <nil>
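The fix.go decision above is driven by a single state probe: cli_runner.go runs "docker container inspect no-preload-307409 --format={{.State.Status}}", and because the container reports Stopped, minikube falls through to restarting the existing container rather than recreating it. A minimal sketch of that probe via os/exec (not the actual cli_runner implementation):

    // Sketch only: mirrors the "docker container inspect ... --format={{.State.Status}}"
    // probe logged by cli_runner.go; not minikube's actual code path.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := containerState("no-preload-307409")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        if state != "running" {
            fmt.Println("unexpected machine state, will restart:", state)
            // a real caller would now run "docker start no-preload-307409"
        }
    }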
	I1213 12:02:59.273796  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.310724  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:02:59.374429  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.374460  620795 retry.go:31] will retry after 1.123869523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
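The apply fails because kubectl's schema validation needs the apiserver's OpenAPI document and nothing is listening on localhost:8443 yet; addons.go therefore hands the command to retry.go, which reruns it after a short, growing delay ("will retry after 1.123869523s"). A minimal sketch of that retry-with-backoff pattern (attempt count, base delay, and jitter are assumptions, not minikube's constants):

    // Sketch only: reruns a kubectl apply with a growing, jittered delay, in the
    // spirit of the addons.go/retry.go lines above. The constants are assumptions.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
        return fmt.Errorf("%s: still failing after %d attempts", manifest, attempts)
    }

    func main() {
        _ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5)
    }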
	I1213 12:02:59.660188  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:02:59.746796  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.746834  620795 retry.go:31] will retry after 827.424249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.773951  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:02:59.886643  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:02:59.984018  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:02:59.984054  620795 retry.go:31] will retry after 1.031600228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.289311  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:00.498512  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:00.574703  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:00.609412  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.609443  620795 retry.go:31] will retry after 1.594897337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:00.654022  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.654055  620795 retry.go:31] will retry after 1.847551508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:00.773391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.016343  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:01.149191  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.149241  620795 retry.go:31] will retry after 1.156400239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:01.273296  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:01.773106  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:02.204552  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:02.273738  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:02.274099  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.274136  620795 retry.go:31] will retry after 1.092655081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.305854  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:02.368964  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.369001  620795 retry.go:31] will retry after 1.680740365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.502311  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:02.587589  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.587627  620795 retry.go:31] will retry after 1.930642019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:02.773890  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.281133  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:03.367295  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:03.462797  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.462834  620795 retry.go:31] will retry after 1.480584037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:03.773095  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.050289  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:04.211663  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.211692  620795 retry.go:31] will retry after 4.628682765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
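In parallel with the apply retries, the ssh_runner.go lines poll "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms; the applies only begin to succeed once that probe finds a running apiserver. A minimal sketch of the polling loop, executed locally rather than over SSH as minikube does (timeout and interval are assumptions):

    // Sketch only: waits for a kube-apiserver process using the same
    // "pgrep -xnf" probe as the ssh_runner.go lines above, but executed
    // locally instead of over SSH. Timeout and interval are assumptions.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func apiserverRunning() bool {
        // pgrep exits non-zero when no process matches, so Run() returning
        // nil means a matching kube-apiserver was found.
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }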
	I1213 12:03:03.307872  622913 out.go:252] * Restarting existing docker container for "no-preload-307409" ...
	I1213 12:03:03.307964  622913 cli_runner.go:164] Run: docker start no-preload-307409
	I1213 12:03:03.599368  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:03.618935  622913 kic.go:430] container "no-preload-307409" state is running.
	I1213 12:03:03.619319  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:03.641333  622913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/config.json ...
	I1213 12:03:03.641563  622913 machine.go:94] provisionDockerMachine start ...
	I1213 12:03:03.641633  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:03.663338  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:03.663870  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:03.663890  622913 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:03:03.664580  622913 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:03:06.819092  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.819117  622913 ubuntu.go:182] provisioning hostname "no-preload-307409"
	I1213 12:03:06.819201  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:06.837856  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:06.838181  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:06.838198  622913 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-307409 && echo "no-preload-307409" | sudo tee /etc/hostname
	I1213 12:03:06.997122  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-307409
	
	I1213 12:03:06.997203  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.016669  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.017014  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.017037  622913 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-307409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-307409/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-307409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:03:07.176125  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:03:07.176151  622913 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:03:07.176182  622913 ubuntu.go:190] setting up certificates
	I1213 12:03:07.176201  622913 provision.go:84] configureAuth start
	I1213 12:03:07.176265  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:07.193873  622913 provision.go:143] copyHostCerts
	I1213 12:03:07.193961  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:03:07.193973  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:03:07.194049  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:03:07.194164  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:03:07.194175  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:03:07.194205  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:03:07.194267  622913 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:03:07.194275  622913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:03:07.194298  622913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:03:07.194346  622913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.no-preload-307409 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-307409]
	I1213 12:03:07.397856  622913 provision.go:177] copyRemoteCerts
	I1213 12:03:07.397930  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:03:07.397969  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.415003  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:07.523762  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 12:03:07.541934  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:03:07.560353  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 12:03:07.577524  622913 provision.go:87] duration metric: took 401.305633ms to configureAuth
	I1213 12:03:07.577567  622913 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:03:07.577753  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:07.577860  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.595178  622913 main.go:143] libmachine: Using SSH client type: native
	I1213 12:03:07.595492  622913 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1213 12:03:07.595506  622913 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:03:07.957883  622913 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:03:07.957909  622913 machine.go:97] duration metric: took 4.316335928s to provisionDockerMachine
	I1213 12:03:07.957921  622913 start.go:293] postStartSetup for "no-preload-307409" (driver="docker")
	I1213 12:03:07.957933  622913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:03:07.958002  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:03:07.958068  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:07.976949  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:04.273235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.518978  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:04.583937  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.583972  620795 retry.go:31] will retry after 4.359648713s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:04.773380  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:04.944170  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:05.011259  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.011298  620795 retry.go:31] will retry after 2.730254551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:05.273717  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:05.773164  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.274023  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:06.773331  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:07.742621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:07.773999  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:07.885064  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:07.885095  620795 retry.go:31] will retry after 5.399825259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.273766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.773645  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:08.841141  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:08.935930  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.935967  620795 retry.go:31] will retry after 8.567303782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.944298  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:09.032112  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:09.032154  620795 retry.go:31] will retry after 7.715566724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:08.088342  622913 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:03:08.091929  622913 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:03:08.092010  622913 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:03:08.092029  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:03:08.092100  622913 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:03:08.092225  622913 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:03:08.092336  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:03:08.100328  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:08.119806  622913 start.go:296] duration metric: took 161.868607ms for postStartSetup
	I1213 12:03:08.119893  622913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:03:08.119935  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.137272  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.240715  622913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:03:08.245595  622913 fix.go:56] duration metric: took 4.963017027s for fixHost
	I1213 12:03:08.245624  622913 start.go:83] releasing machines lock for "no-preload-307409", held for 4.963070517s
	I1213 12:03:08.245713  622913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-307409
	I1213 12:03:08.262782  622913 ssh_runner.go:195] Run: cat /version.json
	I1213 12:03:08.262844  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.263126  622913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:03:08.263189  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:08.283140  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.296409  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:08.391353  622913 ssh_runner.go:195] Run: systemctl --version
	I1213 12:03:08.484408  622913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:03:08.531460  622913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:03:08.537034  622913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:03:08.537102  622913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:03:08.548165  622913 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 12:03:08.548229  622913 start.go:496] detecting cgroup driver to use...
	I1213 12:03:08.548280  622913 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:03:08.548375  622913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:03:08.564936  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:03:08.579568  622913 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:03:08.579670  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:03:08.596861  622913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:03:08.610443  622913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:03:08.718052  622913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:03:08.841997  622913 docker.go:234] disabling docker service ...
	I1213 12:03:08.842083  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:03:08.857246  622913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:03:08.871656  622913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:03:09.021847  622913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:03:09.148277  622913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:03:09.162720  622913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:03:09.178582  622913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:03:09.178712  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.188481  622913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:03:09.188600  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.198182  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.207488  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.217314  622913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:03:09.225728  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.234602  622913 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.243163  622913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:03:09.251840  622913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:03:09.261376  622913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:03:09.269241  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.408118  622913 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 12:03:09.582010  622913 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:03:09.582116  622913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:03:09.586129  622913 start.go:564] Will wait 60s for crictl version
	I1213 12:03:09.586218  622913 ssh_runner.go:195] Run: which crictl
	I1213 12:03:09.589880  622913 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:03:09.617198  622913 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:03:09.617307  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.648039  622913 ssh_runner.go:195] Run: crio --version
	I1213 12:03:09.680132  622913 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 12:03:09.683104  622913 cli_runner.go:164] Run: docker network inspect no-preload-307409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:03:09.699119  622913 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 12:03:09.703132  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.712888  622913 kubeadm.go:884] updating cluster {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:03:09.713027  622913 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 12:03:09.713074  622913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:03:09.749883  622913 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:03:09.749906  622913 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:03:09.749914  622913 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 12:03:09.750028  622913 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-307409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 12:03:09.750104  622913 ssh_runner.go:195] Run: crio config
	I1213 12:03:09.812957  622913 cni.go:84] Creating CNI manager for ""
	I1213 12:03:09.812981  622913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 12:03:09.813006  622913 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:03:09.813030  622913 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-307409 NodeName:no-preload-307409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:03:09.813160  622913 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-307409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:03:09.813240  622913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 12:03:09.821482  622913 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:03:09.821552  622913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:03:09.830108  622913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 12:03:09.842772  622913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 12:03:09.855539  622913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1213 12:03:09.868438  622913 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:03:09.871940  622913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:03:09.881527  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:09.994807  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:10.018299  622913 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409 for IP: 192.168.85.2
	I1213 12:03:10.018324  622913 certs.go:195] generating shared ca certs ...
	I1213 12:03:10.018341  622913 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.018485  622913 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:03:10.018546  622913 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:03:10.018560  622913 certs.go:257] generating profile certs ...
	I1213 12:03:10.018675  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/client.key
	I1213 12:03:10.018739  622913 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key.a40dac7b
	I1213 12:03:10.018788  622913 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key
	I1213 12:03:10.018902  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:03:10.018945  622913 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:03:10.018958  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:03:10.018984  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:03:10.019011  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:03:10.019049  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:03:10.019107  622913 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:03:10.019800  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:03:10.070011  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:03:10.106991  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:03:10.124508  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:03:10.141854  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 12:03:10.159596  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 12:03:10.177143  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:03:10.193680  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/no-preload-307409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 12:03:10.212540  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:03:10.230850  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:03:10.247982  622913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:03:10.265265  622913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:03:10.280828  622913 ssh_runner.go:195] Run: openssl version
	I1213 12:03:10.287915  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.295295  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:03:10.302777  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306712  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.306788  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:03:10.347657  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:03:10.355488  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.362741  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:03:10.370213  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.373963  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.374024  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:03:10.415846  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:03:10.423114  622913 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.430238  622913 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:03:10.437700  622913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441526  622913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.441626  622913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:03:10.482660  622913 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 12:03:10.490193  622913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:03:10.493922  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 12:03:10.537559  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 12:03:10.580339  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 12:03:10.624474  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 12:03:10.668005  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 12:03:10.719243  622913 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 12:03:10.787031  622913 kubeadm.go:401] StartCluster: {Name:no-preload-307409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-307409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:03:10.787127  622913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:03:10.787194  622913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:03:10.866441  622913 cri.go:89] found id: ""
	I1213 12:03:10.866517  622913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:03:10.878947  622913 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 12:03:10.878971  622913 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 12:03:10.879029  622913 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 12:03:10.887787  622913 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 12:03:10.888361  622913 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-307409" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.888611  622913 kubeconfig.go:62] /home/jenkins/minikube-integration/22127-354468/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-307409" cluster setting kubeconfig missing "no-preload-307409" context setting]
	I1213 12:03:10.889058  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.890426  622913 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 12:03:10.898823  622913 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 12:03:10.898859  622913 kubeadm.go:602] duration metric: took 19.881679ms to restartPrimaryControlPlane
	I1213 12:03:10.898869  622913 kubeadm.go:403] duration metric: took 111.848044ms to StartCluster
	I1213 12:03:10.898903  622913 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.899000  622913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:03:10.900707  622913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:03:10.900965  622913 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:03:10.901208  622913 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:03:10.901250  622913 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:03:10.901316  622913 addons.go:70] Setting storage-provisioner=true in profile "no-preload-307409"
	I1213 12:03:10.901329  622913 addons.go:239] Setting addon storage-provisioner=true in "no-preload-307409"
	I1213 12:03:10.901354  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.901796  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.902330  622913 addons.go:70] Setting dashboard=true in profile "no-preload-307409"
	I1213 12:03:10.902349  622913 addons.go:239] Setting addon dashboard=true in "no-preload-307409"
	W1213 12:03:10.902356  622913 addons.go:248] addon dashboard should already be in state true
	I1213 12:03:10.902383  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.902788  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.906749  622913 addons.go:70] Setting default-storageclass=true in profile "no-preload-307409"
	I1213 12:03:10.907002  622913 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-307409"
	I1213 12:03:10.907925  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:10.908085  622913 out.go:179] * Verifying Kubernetes components...
	I1213 12:03:10.911613  622913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:03:10.936135  622913 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:03:10.936200  622913 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 12:03:10.939926  622913 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 12:03:10.940040  622913 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:10.940057  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:03:10.940121  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.942800  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 12:03:10.942825  622913 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 12:03:10.942890  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:10.947265  622913 addons.go:239] Setting addon default-storageclass=true in "no-preload-307409"
	I1213 12:03:10.947306  622913 host.go:66] Checking if "no-preload-307409" exists ...
	I1213 12:03:10.947819  622913 cli_runner.go:164] Run: docker container inspect no-preload-307409 --format={{.State.Status}}
	I1213 12:03:11.005750  622913 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.005772  622913 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:03:11.005782  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.005838  622913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-307409
	I1213 12:03:11.023641  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
	I1213 12:03:11.041145  622913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/no-preload-307409/id_rsa Username:docker}
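The lines above render each addon manifest, copy it to /etc/kubernetes/addons on the node over SSH, and then apply it with the bundled kubectl. A condensed local sketch of that write-then-apply step, with placeholder paths and manifest content, using plain os/exec rather than minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Placeholder YAML standing in for a rendered addon manifest.
    	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo-addon\n")
    	path := "/tmp/demo-addon.yaml" // stand-in for /etc/kubernetes/addons/...

    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, "write manifest:", err)
    		os.Exit(1)
    	}

    	// Equivalent of: kubectl apply -f <manifest>
    	out, err := exec.Command("kubectl", "apply", "-f", path).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "apply failed:", err)
    		os.Exit(1)
    	}
    }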
	I1213 12:03:11.111003  622913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:03:11.173593  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.173636  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 12:03:11.173654  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 12:03:11.188163  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 12:03:11.188185  622913 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 12:03:11.213443  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 12:03:11.213508  622913 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 12:03:11.227236  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.230811  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 12:03:11.230883  622913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 12:03:11.251133  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 12:03:11.251205  622913 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 12:03:11.292200  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 12:03:11.292226  622913 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 12:03:11.305259  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 12:03:11.305283  622913 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 12:03:11.318210  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 12:03:11.318236  622913 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 12:03:11.331855  622913 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:11.331882  622913 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 12:03:11.346399  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.535442  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.535581  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535629  622913 retry.go:31] will retry after 290.823808ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535633  622913 retry.go:31] will retry after 252.781045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.535694  622913 node_ready.go:35] waiting up to 6m0s for node "no-preload-307409" to be "Ready" ...
	W1213 12:03:11.536032  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.536057  622913 retry.go:31] will retry after 294.061208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
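The "retry.go:31] will retry after ..." entries above show minikube re-running the failed apply with a growing, jittered delay before escalating to kubectl apply --force in the next lines. A minimal sketch of that retry-with-backoff pattern; the function name and backoff constants are illustrative, not minikube's retry package:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff runs fn until it succeeds or attempts are exhausted,
    // sleeping a jittered, roughly doubling delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	i := 0
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		i++
    		if i < 3 {
    			return fmt.Errorf("connection refused (attempt %d)", i)
    		}
    		return nil
    	})
    }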
	I1213 12:03:11.788663  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:11.827131  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:03:11.830443  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:11.858572  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.858608  622913 retry.go:31] will retry after 534.111043ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.903268  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.903302  622913 retry.go:31] will retry after 517.641227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:11.928403  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:11.928440  622913 retry.go:31] will retry after 261.246628ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.190196  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:12.253861  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.253905  622913 retry.go:31] will retry after 750.097801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.392854  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:12.421390  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:12.466046  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.466119  622913 retry.go:31] will retry after 345.117349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:12.494512  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.494543  622913 retry.go:31] will retry after 582.433152ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.811477  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:12.872208  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:12.872254  622913 retry.go:31] will retry after 1.066115266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.004542  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:09.273871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:09.773704  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.273974  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:10.773144  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.273093  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:11.773168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.273119  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:12.773938  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.274064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
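The second process in this log (pid 620795) is in the middle of the same dance: it polls pgrep -xnf kube-apiserver.*minikube.* roughly twice a second until the apiserver process shows up, then attempts its own addon apply. A small sketch of that wait loop as a plain exec of pgrep; the pattern string is copied from the log and the helper name is hypothetical:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process matching the
    // minikube pattern exists, or the timeout elapses.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // pgrep exits 0 once a matching process is found
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver process found")
    }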
	I1213 12:03:13.285062  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.346306  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.346338  620795 retry.go:31] will retry after 9.878335415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.773923  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:13.077848  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.142906  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.142942  622913 retry.go:31] will retry after 477.26404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.177073  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.177107  622913 retry.go:31] will retry after 558.594273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:13.536929  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
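Here node_ready.go is polling the node's Ready condition directly against https://192.168.85.2:8443 and, while the apiserver is still coming up, sees connection refused and keeps retrying within its 6m0s budget. A hedged client-go sketch of the same check; the kubeconfig path and node name are placeholders and this is not minikube's node_ready implementation:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has condition Ready=True.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. "connect: connection refused" while the apiserver restarts
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeIsReady(context.Background(), cs, "no-preload-307409")
    	fmt.Println("ready:", ready, "err:", err)
    }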
	I1213 12:03:13.621309  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:13.684925  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.684962  622913 retry.go:31] will retry after 887.0827ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.735891  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:13.838454  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.838488  622913 retry.go:31] will retry after 1.840863262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.938866  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:13.997740  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:13.997780  622913 retry.go:31] will retry after 1.50758238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.572279  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:14.649792  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:14.649830  622913 retry.go:31] will retry after 2.273525411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.505555  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:15.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:15.566161  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.566200  622913 retry.go:31] will retry after 1.268984334s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.680410  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:15.739773  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:15.739804  622913 retry.go:31] will retry after 2.516127735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.835378  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:16.919361  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.919396  622913 retry.go:31] will retry after 2.060639493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.923603  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:16.987685  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.987717  622913 retry.go:31] will retry after 3.014723999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:18.037172  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:14.273845  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:14.773934  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:15.774017  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.273243  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:16.748013  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 12:03:16.773600  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:16.899498  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:16.899555  620795 retry.go:31] will retry after 7.173965376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.273146  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:17.504219  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:17.614341  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.614369  620795 retry.go:31] will retry after 8.805046452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:17.773767  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.273931  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.773442  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:18.256769  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:18.385179  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.385215  622913 retry.go:31] will retry after 1.545787463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:18.980290  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:19.083283  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.083326  622913 retry.go:31] will retry after 3.363160165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.931900  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:19.994541  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:19.994572  622913 retry.go:31] will retry after 3.448577935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.003109  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:20.075345  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:20.075383  622913 retry.go:31] will retry after 2.247696448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:20.536209  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:22.323733  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:22.390042  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.390078  622913 retry.go:31] will retry after 4.701837343s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.447431  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:22.510069  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:22.510101  622913 retry.go:31] will retry after 8.996063036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:22.536655  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:19.273647  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:19.773235  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.273783  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:20.774109  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.273100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:21.774041  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.273187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:22.773919  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:23.224947  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:23.273354  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:23.287102  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.287132  620795 retry.go:31] will retry after 17.975754277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.774029  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.073794  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:24.135298  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.135337  620795 retry.go:31] will retry after 17.719019377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.443398  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:23.501606  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:23.501640  622913 retry.go:31] will retry after 3.90534406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:24.537114  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:27.036285  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:27.092481  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:27.162031  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.162065  622913 retry.go:31] will retry after 11.355394108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.407221  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:27.478522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:27.478557  622913 retry.go:31] will retry after 8.009668822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:24.273481  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:24.773666  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:25.773170  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.273652  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:26.420263  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:26.478183  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.478224  620795 retry.go:31] will retry after 20.903659468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:26.773685  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:27.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.273297  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:28.773524  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:29.537044  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:31.506350  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:31.537137  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:31.567063  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:31.567101  622913 retry.go:31] will retry after 5.348365924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:29.273854  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:29.773973  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.273040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:30.773142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.273258  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:31.773723  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.274053  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:32.774024  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.273125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:33.773200  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
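The interleaved lines from PID 620795 show a second minikube process polling for a kube-apiserver process roughly every 500ms with `sudo pgrep -xnf kube-apiserver.*minikube.*` before it proceeds. A minimal standalone sketch of that wait loop follows; the pgrep flags and pattern are the ones from the log, while the 2-minute deadline and function name are assumptions, not minikube's ssh_runner implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning runs the same pgrep command the log shows; pgrep exits 0
// only when at least one process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // deadline is an assumption
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}

In the run above the loop never finds a matching process during this window, so the pgrep lines keep repeating at half-second intervals.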
	W1213 12:03:33.537277  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:35.488997  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:35.615701  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:35.615734  622913 retry.go:31] will retry after 18.593547057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:36.036633  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:36.916463  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:36.985838  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:36.985870  622913 retry.go:31] will retry after 7.879856322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:34.273224  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:34.773126  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.273423  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:35.773837  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.273251  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:36.773088  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.273142  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:37.773099  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.273954  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.773678  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:38.518385  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:38.536542  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:38.629558  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:38.629596  622913 retry.go:31] will retry after 11.083764817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:40.537112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:43.037066  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
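The node_ready.go warnings amount to repeatedly fetching the node object from the API server at 192.168.85.2:8443 and reading its Ready condition; while the apiserver is unreachable every attempt fails with "connection refused" and the check retries. A rough sketch of that check, using kubectl and a jsonpath query instead of minikube's client code, with the node name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeReady asks the API server for the node's Ready condition status.
func nodeReady(node string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		// With the API server down this surfaces as "connection refused",
		// which is exactly what the repeated node_ready.go warnings report.
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ready, err := nodeReady("no-preload-307409")
	if err != nil {
		fmt.Println("will retry:", err)
		return
	}
	fmt.Println("Ready:", ready)
}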
	I1213 12:03:39.273565  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:39.773916  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.274028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:40.773120  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.263107  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:03:41.273658  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1213 12:03:41.328103  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.328152  620795 retry.go:31] will retry after 24.557962123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.773949  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:41.855229  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:41.913722  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:41.913758  620795 retry.go:31] will retry after 29.657634591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:42.273168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:42.773137  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.273064  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:43.773040  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.866836  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:44.926788  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:44.926822  622913 retry.go:31] will retry after 12.537177434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:45.536544  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:47.537056  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:44.273531  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:44.773694  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.273864  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:45.773153  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.273336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:46.773222  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.273977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:47.382145  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:47.444684  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.444761  620795 retry.go:31] will retry after 14.939941469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:47.773125  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.273113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:48.773715  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.714461  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:03:49.810126  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:49.810163  622913 retry.go:31] will retry after 17.034686012s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:50.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:52.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
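
The node_ready.go warnings come from a parallel wait loop that polls the node's Ready condition roughly every 2.5 s and keeps retrying while 192.168.85.2:8443 refuses connections. A rough client-go sketch of such a poll (node name and kubeconfig path taken from the log; this is an illustration, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any kubeconfig that points at https://192.168.85.2:8443 will do; the
	// path below is the in-node one used throughout this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "no-preload-307409", metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is the same
			// "connection refused" warning seen above; retry.
			fmt.Println("error getting node (will retry):", err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
}
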
	I1213 12:03:49.274132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:49.773105  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.273278  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:50.773375  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.273108  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:51.773957  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.273086  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:52.773220  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.273134  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:53.773528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.210466  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:03:54.276658  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.276693  622913 retry.go:31] will retry after 15.477790737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:03:55.037124  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:03:57.464704  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:03:57.536423  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:03:57.546896  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:57.546941  622913 retry.go:31] will retry after 45.136010492s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:03:54.273748  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:54.773661  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.273945  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:55.773185  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.273156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:56.773921  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:03:57.273352  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:03:57.273425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:03:57.360759  620795 cri.go:89] found id: ""
	I1213 12:03:57.360784  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.360793  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:03:57.360799  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:03:57.360899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:03:57.386673  620795 cri.go:89] found id: ""
	I1213 12:03:57.386699  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.386709  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:03:57.386715  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:03:57.386772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:03:57.412179  620795 cri.go:89] found id: ""
	I1213 12:03:57.412202  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.412211  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:03:57.412217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:03:57.412275  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:03:57.440758  620795 cri.go:89] found id: ""
	I1213 12:03:57.440782  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.440791  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:03:57.440797  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:03:57.440863  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:03:57.474164  620795 cri.go:89] found id: ""
	I1213 12:03:57.474189  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.474198  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:03:57.474205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:03:57.474266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:03:57.513790  620795 cri.go:89] found id: ""
	I1213 12:03:57.513811  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.513820  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:03:57.513826  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:03:57.513882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:03:57.549685  620795 cri.go:89] found id: ""
	I1213 12:03:57.549708  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.549716  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:03:57.549723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:03:57.549784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:03:57.575809  620795 cri.go:89] found id: ""
	I1213 12:03:57.575830  620795 logs.go:282] 0 containers: []
	W1213 12:03:57.575839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:03:57.575848  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:03:57.575860  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:03:57.645191  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:03:57.645229  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:03:57.662016  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:03:57.662048  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:03:57.724395  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:03:57.715919    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.716483    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718246    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.718931    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:03:57.720750    1918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:03:57.724433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:03:57.724446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:03:57.752976  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:03:57.753012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
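
When the repeated pgrep probe for kube-apiserver keeps coming back empty, the run switches to a diagnostic pass: it asks the CRI for every expected control-plane container, finds none, and then collects kubelet, dmesg, CRI-O, describe-nodes and container-status output. A small Go sketch of that container enumeration step, assuming crictl is available on the node (illustrative only, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, name := range components {
		// Same query as the log: all states, IDs only, filtered by name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}

In this run every query returns an empty ID list, which is why the pass falls back to journalctl and dmesg and why describe-nodes also fails against localhost:8443.
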
	W1213 12:04:00.036301  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:02.037075  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:00.282268  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:00.369064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:00.369151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:00.446224  620795 cri.go:89] found id: ""
	I1213 12:04:00.446257  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.446267  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:00.446274  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:00.446398  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:00.492701  620795 cri.go:89] found id: ""
	I1213 12:04:00.492728  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.492737  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:00.492744  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:00.492814  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:00.537493  620795 cri.go:89] found id: ""
	I1213 12:04:00.537573  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.537600  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:00.537617  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:00.537703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:00.567417  620795 cri.go:89] found id: ""
	I1213 12:04:00.567457  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.567467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:00.567493  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:00.567660  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:00.597259  620795 cri.go:89] found id: ""
	I1213 12:04:00.597333  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.597358  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:00.597371  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:00.597453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:00.624935  620795 cri.go:89] found id: ""
	I1213 12:04:00.625008  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.625032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:00.625053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:00.625125  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:00.656802  620795 cri.go:89] found id: ""
	I1213 12:04:00.656830  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.656846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:00.656853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:00.656924  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:00.684243  620795 cri.go:89] found id: ""
	I1213 12:04:00.684318  620795 logs.go:282] 0 containers: []
	W1213 12:04:00.684342  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:00.684364  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:00.684406  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:00.755205  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:00.755244  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:00.772314  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:00.772345  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:00.841157  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:00.832743    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.833321    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835282    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.835830    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:00.836909    2028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:00.841236  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:00.841257  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:00.870321  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:00.870357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:02.384998  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:02.445321  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:02.445354  620795 retry.go:31] will retry after 47.283712675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:03.403559  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:03.414405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:03.414472  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:03.440207  620795 cri.go:89] found id: ""
	I1213 12:04:03.440275  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.440299  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:03.440320  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:03.440406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:03.473860  620795 cri.go:89] found id: ""
	I1213 12:04:03.473906  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.473916  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:03.473923  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:03.474005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:03.500069  620795 cri.go:89] found id: ""
	I1213 12:04:03.500102  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.500111  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:03.500118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:03.500194  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:03.550253  620795 cri.go:89] found id: ""
	I1213 12:04:03.550329  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.550353  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:03.550372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:03.550459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:03.595628  620795 cri.go:89] found id: ""
	I1213 12:04:03.595713  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.595737  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:03.595757  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:03.595871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:03.626718  620795 cri.go:89] found id: ""
	I1213 12:04:03.626796  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.626827  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:03.626849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:03.626954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:03.657254  620795 cri.go:89] found id: ""
	I1213 12:04:03.657281  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.657290  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:03.657297  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:03.657356  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:03.682193  620795 cri.go:89] found id: ""
	I1213 12:04:03.682268  620795 logs.go:282] 0 containers: []
	W1213 12:04:03.682292  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:03.682315  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:03.682355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:03.750002  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:03.741882    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.742330    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.743987    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.744602    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:03.746402    2141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:03.750025  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:03.750039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:03.779008  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:03.779046  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:03.807344  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:03.807424  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:03.879158  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:03.879201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:04:04.537094  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:06.845581  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:06.913058  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.913091  622913 retry.go:31] will retry after 30.701510805s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:07.036960  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:05.886355  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:05.944754  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:05.944842  620795 retry.go:31] will retry after 33.803790372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:06.397350  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:06.407918  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:06.407990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:06.436013  620795 cri.go:89] found id: ""
	I1213 12:04:06.436040  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.436049  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:06.436056  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:06.436121  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:06.462051  620795 cri.go:89] found id: ""
	I1213 12:04:06.462074  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.462083  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:06.462089  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:06.462147  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:06.487916  620795 cri.go:89] found id: ""
	I1213 12:04:06.487943  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.487952  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:06.487959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:06.488027  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:06.514150  620795 cri.go:89] found id: ""
	I1213 12:04:06.514181  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.514190  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:06.514196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:06.514255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:06.567862  620795 cri.go:89] found id: ""
	I1213 12:04:06.567900  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.567910  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:06.567917  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:06.567977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:06.615399  620795 cri.go:89] found id: ""
	I1213 12:04:06.615428  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.615446  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:06.615453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:06.615546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:06.645078  620795 cri.go:89] found id: ""
	I1213 12:04:06.645150  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.645174  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:06.645196  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:06.645278  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:06.673976  620795 cri.go:89] found id: ""
	I1213 12:04:06.674002  620795 logs.go:282] 0 containers: []
	W1213 12:04:06.674011  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:06.674022  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:06.674067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:06.703467  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:06.703504  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:06.731693  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:06.731721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:06.801110  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:06.801154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:06.817774  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:06.817804  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:06.899087  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:06.890513    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.891812    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893652    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.893965    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:06.895397    2280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:04:09.536141  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.755504  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:09.840522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:09.840549  622913 retry.go:31] will retry after 18.501787354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:11.536619  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:09.400132  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:09.410430  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:09.410500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:09.440067  620795 cri.go:89] found id: ""
	I1213 12:04:09.440090  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.440100  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:09.440107  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:09.440167  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:09.470041  620795 cri.go:89] found id: ""
	I1213 12:04:09.470062  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.470071  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:09.470078  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:09.470135  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:09.496421  620795 cri.go:89] found id: ""
	I1213 12:04:09.496444  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.496453  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:09.496459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:09.496516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:09.535210  620795 cri.go:89] found id: ""
	I1213 12:04:09.535233  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.535241  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:09.535248  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:09.535322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:09.593867  620795 cri.go:89] found id: ""
	I1213 12:04:09.593894  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.593905  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:09.593912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:09.593967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:09.633869  620795 cri.go:89] found id: ""
	I1213 12:04:09.633895  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.633904  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:09.633911  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:09.633967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:09.660082  620795 cri.go:89] found id: ""
	I1213 12:04:09.660104  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.660113  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:09.660119  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:09.660180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:09.686975  620795 cri.go:89] found id: ""
	I1213 12:04:09.687005  620795 logs.go:282] 0 containers: []
	W1213 12:04:09.687013  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:09.687023  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:09.687035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:09.756960  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:09.756994  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:09.779895  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:09.779929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:09.858208  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:09.850094    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.850752    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.852494    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.853050    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:09.854767    2380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:09.858229  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:09.858243  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:09.886438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:09.886472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:11.571741  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:11.635299  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:11.635338  620795 retry.go:31] will retry after 28.848947099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1213 12:04:12.418247  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:12.428921  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:12.428996  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:12.453422  620795 cri.go:89] found id: ""
	I1213 12:04:12.453447  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.453455  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:12.453462  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:12.453523  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:12.482791  620795 cri.go:89] found id: ""
	I1213 12:04:12.482818  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.482827  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:12.482834  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:12.482892  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:12.509185  620795 cri.go:89] found id: ""
	I1213 12:04:12.509207  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.509216  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:12.509222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:12.509281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:12.555782  620795 cri.go:89] found id: ""
	I1213 12:04:12.555810  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.555820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:12.555868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:12.555953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:12.609661  620795 cri.go:89] found id: ""
	I1213 12:04:12.609682  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.609691  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:12.609697  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:12.609753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:12.636223  620795 cri.go:89] found id: ""
	I1213 12:04:12.636251  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.636268  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:12.636275  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:12.636335  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:12.663456  620795 cri.go:89] found id: ""
	I1213 12:04:12.663484  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.663493  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:12.663499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:12.663583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:12.688687  620795 cri.go:89] found id: ""
	I1213 12:04:12.688714  620795 logs.go:282] 0 containers: []
	W1213 12:04:12.688723  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:12.688733  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:12.688745  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:12.705209  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:12.705240  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:12.766977  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:12.758035    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.758936    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.760623    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.761225    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:12.762917    2495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:12.767041  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:12.767064  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:12.795358  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:12.795396  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:12.823112  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:12.823143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:04:14.037178  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:16.536405  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:15.388432  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:15.398781  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:15.398905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:15.425880  620795 cri.go:89] found id: ""
	I1213 12:04:15.425920  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.425929  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:15.425935  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:15.426005  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:15.451424  620795 cri.go:89] found id: ""
	I1213 12:04:15.451467  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.451477  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:15.451486  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:15.451583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:15.476481  620795 cri.go:89] found id: ""
	I1213 12:04:15.476525  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.476534  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:15.476541  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:15.476612  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:15.502062  620795 cri.go:89] found id: ""
	I1213 12:04:15.502088  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.502097  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:15.502104  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:15.502173  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:15.588057  620795 cri.go:89] found id: ""
	I1213 12:04:15.588132  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.588155  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:15.588175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:15.588279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:15.616479  620795 cri.go:89] found id: ""
	I1213 12:04:15.616506  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.616519  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:15.616526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:15.616602  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:15.649712  620795 cri.go:89] found id: ""
	I1213 12:04:15.649789  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.649813  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:15.649827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:15.649912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:15.675926  620795 cri.go:89] found id: ""
	I1213 12:04:15.675995  620795 logs.go:282] 0 containers: []
	W1213 12:04:15.676019  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:15.676034  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:15.676049  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:15.692725  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:15.692755  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:15.759900  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:15.751635    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.752539    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754270    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.754749    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:15.756378    2608 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:15.759963  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:15.759989  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:15.789315  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:15.789425  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:15.818647  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:15.818675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.385812  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:18.396389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:18.396461  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:18.422777  620795 cri.go:89] found id: ""
	I1213 12:04:18.422800  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.422808  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:18.422814  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:18.422873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:18.448579  620795 cri.go:89] found id: ""
	I1213 12:04:18.448607  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.448616  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:18.448622  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:18.448677  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:18.474629  620795 cri.go:89] found id: ""
	I1213 12:04:18.474707  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.474744  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:18.474768  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:18.474859  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:18.499793  620795 cri.go:89] found id: ""
	I1213 12:04:18.499819  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.499828  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:18.499837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:18.499894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:18.531333  620795 cri.go:89] found id: ""
	I1213 12:04:18.531368  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.531377  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:18.531383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:18.531450  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:18.583893  620795 cri.go:89] found id: ""
	I1213 12:04:18.583923  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.583932  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:18.583939  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:18.584008  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:18.620082  620795 cri.go:89] found id: ""
	I1213 12:04:18.620120  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.620129  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:18.620135  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:18.620210  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:18.647112  620795 cri.go:89] found id: ""
	I1213 12:04:18.647137  620795 logs.go:282] 0 containers: []
	W1213 12:04:18.647145  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:18.647155  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:18.647167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:18.712791  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:18.712833  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:18.728892  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:18.728920  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:18.793078  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:18.784898    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.785594    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787226    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.787863    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:18.789553    2722 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:18.793150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:18.793172  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:18.821911  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:18.821947  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:18.537035  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:20.537076  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:23.036959  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:21.353995  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:21.364153  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:21.364265  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:21.389593  620795 cri.go:89] found id: ""
	I1213 12:04:21.389673  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.389690  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:21.389698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:21.389773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:21.418684  620795 cri.go:89] found id: ""
	I1213 12:04:21.418706  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.418715  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:21.418722  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:21.418778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:21.442724  620795 cri.go:89] found id: ""
	I1213 12:04:21.442799  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.442822  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:21.442841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:21.442927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:21.472117  620795 cri.go:89] found id: ""
	I1213 12:04:21.472141  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.472150  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:21.472156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:21.472213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:21.501589  620795 cri.go:89] found id: ""
	I1213 12:04:21.501612  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.501621  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:21.501627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:21.501688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:21.563954  620795 cri.go:89] found id: ""
	I1213 12:04:21.564023  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.564046  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:21.564069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:21.564151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:21.612229  620795 cri.go:89] found id: ""
	I1213 12:04:21.612263  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.612273  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:21.612280  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:21.612339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:21.639602  620795 cri.go:89] found id: ""
	I1213 12:04:21.639636  620795 logs.go:282] 0 containers: []
	W1213 12:04:21.639645  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:21.639655  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:21.639669  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:21.705516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:21.705552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:21.722491  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:21.722521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:21.783641  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:21.775744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.776319    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.777813    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.778191    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:21.779744    2835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:21.783663  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:21.783676  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:21.811307  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:21.811340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:25.037157  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:27.037243  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:24.340508  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:24.351403  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:24.351482  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:24.382302  620795 cri.go:89] found id: ""
	I1213 12:04:24.382379  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.382404  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:24.382425  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:24.382538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:24.408839  620795 cri.go:89] found id: ""
	I1213 12:04:24.408862  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.408871  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:24.408878  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:24.408936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:24.435623  620795 cri.go:89] found id: ""
	I1213 12:04:24.435651  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.435661  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:24.435667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:24.435727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:24.461121  620795 cri.go:89] found id: ""
	I1213 12:04:24.461149  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.461158  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:24.461165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:24.461251  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:24.486111  620795 cri.go:89] found id: ""
	I1213 12:04:24.486144  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.486153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:24.486176  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:24.486257  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:24.511493  620795 cri.go:89] found id: ""
	I1213 12:04:24.511567  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.511578  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:24.511585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:24.511646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:24.546004  620795 cri.go:89] found id: ""
	I1213 12:04:24.546029  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.546052  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:24.546059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:24.546129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:24.573601  620795 cri.go:89] found id: ""
	I1213 12:04:24.573677  620795 logs.go:282] 0 containers: []
	W1213 12:04:24.573699  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:24.573720  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:24.573758  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:24.651738  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:24.651779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:24.669002  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:24.669035  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:24.734744  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:24.726695    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.727312    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729032    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.729495    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:24.731022    2947 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:24.734767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:24.734780  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:24.763652  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:24.763687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.296287  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:27.306558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:27.306632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:27.331288  620795 cri.go:89] found id: ""
	I1213 12:04:27.331315  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.331324  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:27.331331  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:27.331388  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:27.357587  620795 cri.go:89] found id: ""
	I1213 12:04:27.357611  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.357620  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:27.357626  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:27.357681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:27.383604  620795 cri.go:89] found id: ""
	I1213 12:04:27.383628  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.383637  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:27.383644  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:27.383699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:27.408104  620795 cri.go:89] found id: ""
	I1213 12:04:27.408183  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.408199  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:27.408207  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:27.408273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:27.434284  620795 cri.go:89] found id: ""
	I1213 12:04:27.434309  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.434318  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:27.434325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:27.434389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:27.459356  620795 cri.go:89] found id: ""
	I1213 12:04:27.459382  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.459391  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:27.459399  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:27.459457  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:27.484476  620795 cri.go:89] found id: ""
	I1213 12:04:27.484543  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.484558  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:27.484565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:27.484630  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:27.510910  620795 cri.go:89] found id: ""
	I1213 12:04:27.510937  620795 logs.go:282] 0 containers: []
	W1213 12:04:27.510946  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:27.510955  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:27.510967  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:27.543054  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:27.543085  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:27.641750  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:27.634259    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.634796    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.636509    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.637087    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:27.638180    3058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:27.641818  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:27.641838  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:27.671375  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:27.671412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:27.701704  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:27.701735  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
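	The loop above is minikube's log-collection pass: it asks crictl for each control-plane container by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to journalctl, dmesg, describe nodes and a raw container listing. As an illustrative aside (not part of the test run), the same check can be reproduced by hand from the host; <profile> is a placeholder for whichever cluster this log belongs to:
	    # Hypothetical manual check, mirroring the crictl calls in the log:
	    minikube ssh -p <profile> -- sudo crictl ps -a --name kube-apiserver
	    # An empty listing matches the repeated 'found id: ""' lines above:
	    # the apiserver container was never created, so every kubectl call
	    # against localhost:8443 below is refused.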
	I1213 12:04:28.342721  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:28.405775  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:28.405881  622913 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
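	The kubectl stderr above suggests --validate=false, but that flag only skips the client-side OpenAPI schema download; the apply itself still has to reach the apiserver on localhost:8443, which is refusing connections, so minikube's retry is the appropriate response here. Purely as a hedged sketch of what the suggested flag would look like (paths copied from the log):
	    # Sketch only: disables client-side validation, does not fix the root cause.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/storageclass.yaml
	    # Still fails while nothing listens on 8443; validation is not the blocker.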
	W1213 12:04:29.536294  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:31.536581  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:30.268871  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:30.279472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:30.279561  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:30.305479  620795 cri.go:89] found id: ""
	I1213 12:04:30.305504  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.305513  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:30.305520  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:30.305577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:30.330879  620795 cri.go:89] found id: ""
	I1213 12:04:30.330904  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.330914  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:30.330920  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:30.330978  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:30.358794  620795 cri.go:89] found id: ""
	I1213 12:04:30.358821  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.358830  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:30.358837  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:30.358899  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:30.384574  620795 cri.go:89] found id: ""
	I1213 12:04:30.384648  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.384662  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:30.384669  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:30.384728  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:30.409348  620795 cri.go:89] found id: ""
	I1213 12:04:30.409374  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.409383  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:30.409390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:30.409460  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:30.435261  620795 cri.go:89] found id: ""
	I1213 12:04:30.435286  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.435295  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:30.435302  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:30.435357  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:30.459810  620795 cri.go:89] found id: ""
	I1213 12:04:30.459834  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.459843  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:30.459849  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:30.459906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:30.485697  620795 cri.go:89] found id: ""
	I1213 12:04:30.485720  620795 logs.go:282] 0 containers: []
	W1213 12:04:30.485728  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:30.485738  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:30.485749  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:30.513499  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:30.513534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:30.574739  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:30.574767  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:30.658042  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:30.658078  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:30.678263  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:30.678291  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:30.741695  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:30.733736    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.734524    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736026    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.736488    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:30.737955    3185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
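	Every "describe nodes" attempt in this stretch fails identically: the client dials [::1]:8443 and the connection is refused, meaning no process is accepting connections on the apiserver port inside the node. An illustrative way to confirm that directly (not something the test runs) would be:
	    # Inside the node: is anything bound to the apiserver port?
	    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
	    # Or probe the health endpoint that kubectl would eventually hit:
	    curl -ks https://localhost:8443/healthz || echo "apiserver unreachable"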
	I1213 12:04:33.242096  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:33.253053  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:33.253146  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:33.279722  620795 cri.go:89] found id: ""
	I1213 12:04:33.279748  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.279756  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:33.279764  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:33.279820  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:33.306092  620795 cri.go:89] found id: ""
	I1213 12:04:33.306129  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.306139  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:33.306163  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:33.306252  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:33.332772  620795 cri.go:89] found id: ""
	I1213 12:04:33.332796  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.332813  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:33.332819  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:33.332882  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:33.367716  620795 cri.go:89] found id: ""
	I1213 12:04:33.367744  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.367754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:33.367760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:33.367822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:33.400175  620795 cri.go:89] found id: ""
	I1213 12:04:33.400242  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.400258  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:33.400266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:33.400325  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:33.424852  620795 cri.go:89] found id: ""
	I1213 12:04:33.424877  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.424887  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:33.424894  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:33.424984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:33.453556  620795 cri.go:89] found id: ""
	I1213 12:04:33.453581  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.453590  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:33.453597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:33.453653  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:33.479131  620795 cri.go:89] found id: ""
	I1213 12:04:33.479156  620795 logs.go:282] 0 containers: []
	W1213 12:04:33.479165  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:33.479175  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:33.479187  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:33.549906  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:33.550637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:33.572706  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:33.572863  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:33.662497  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:33.653770    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.654281    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656228    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.656866    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:33.658492    3283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:33.662522  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:33.662535  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:33.692067  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:33.692111  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:04:33.536622  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:36.036352  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:37.615506  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:37.688522  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:37.688627  622913 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1213 12:04:38.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:36.220187  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:36.230829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:36.230906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:36.260247  620795 cri.go:89] found id: ""
	I1213 12:04:36.260271  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.260280  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:36.260286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:36.260342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:36.285940  620795 cri.go:89] found id: ""
	I1213 12:04:36.285973  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.285982  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:36.285988  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:36.286059  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:36.311531  620795 cri.go:89] found id: ""
	I1213 12:04:36.311553  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.311561  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:36.311568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:36.311633  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:36.336755  620795 cri.go:89] found id: ""
	I1213 12:04:36.336849  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.336865  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:36.336873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:36.336933  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:36.361652  620795 cri.go:89] found id: ""
	I1213 12:04:36.361676  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.361684  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:36.361690  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:36.361748  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:36.392507  620795 cri.go:89] found id: ""
	I1213 12:04:36.392530  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.392539  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:36.392545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:36.392601  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:36.418503  620795 cri.go:89] found id: ""
	I1213 12:04:36.418526  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.418535  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:36.418540  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:36.418614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:36.444832  620795 cri.go:89] found id: ""
	I1213 12:04:36.444856  620795 logs.go:282] 0 containers: []
	W1213 12:04:36.444865  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:36.444874  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:36.444891  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:36.515523  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:36.515566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:36.535671  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:36.535699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:36.655383  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:36.646224    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.647083    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.648816    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.649375    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:36.651021    3399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:36.655406  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:36.655421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:36.684176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:36.684212  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.215366  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:39.225843  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:39.225914  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:04:40.037338  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:42.538150  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:42.683554  622913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:42.744769  622913 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:42.744869  622913 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.747993  622913 out.go:179] * Enabled addons: 
	I1213 12:04:42.750740  622913 addons.go:530] duration metric: took 1m31.849485278s for enable addons: enabled=[]
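	"Enabled addons:" followed by enabled=[] records that none of the requested addons (default-storageclass, storage-provisioner, dashboard) were applied within the 1m31s window, consistent with every apply above failing on the refused 8443 connection. Addons can be retried later without restarting the cluster; a hedged sketch using the profile name from this run:
	    # Illustrative retry once 192.168.85.2:8443 accepts connections again:
	    minikube addons enable storage-provisioner -p no-preload-307409
	    minikube addons enable dashboard -p no-preload-307409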
	I1213 12:04:39.251825  620795 cri.go:89] found id: ""
	I1213 12:04:39.251850  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.251860  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:39.251867  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:39.251927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:39.280966  620795 cri.go:89] found id: ""
	I1213 12:04:39.280991  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.281000  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:39.281007  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:39.281063  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:39.305488  620795 cri.go:89] found id: ""
	I1213 12:04:39.305511  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.305520  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:39.305526  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:39.305583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:39.330461  620795 cri.go:89] found id: ""
	I1213 12:04:39.330484  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.330493  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:39.330500  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:39.330556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:39.355410  620795 cri.go:89] found id: ""
	I1213 12:04:39.355483  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.355507  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:39.355565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:39.355706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:39.384890  620795 cri.go:89] found id: ""
	I1213 12:04:39.384916  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.384926  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:39.384933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:39.385017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:39.409735  620795 cri.go:89] found id: ""
	I1213 12:04:39.409758  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.409767  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:39.409773  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:39.409833  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:39.439648  620795 cri.go:89] found id: ""
	I1213 12:04:39.439673  620795 logs.go:282] 0 containers: []
	W1213 12:04:39.439685  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:39.439695  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:39.439706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:39.505768  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:39.505803  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:39.525572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:39.525602  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:39.624619  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:39.616542    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.617459    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619080    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.619382    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:39.620943    3514 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:39.624643  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:39.624656  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:39.653269  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:39.653306  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:39.749621  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1213 12:04:39.805957  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:39.806064  620795 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:40.484759  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1213 12:04:40.549677  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:40.549776  620795 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:42.182348  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:42.195718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:42.195860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:42.224999  620795 cri.go:89] found id: ""
	I1213 12:04:42.225044  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.225058  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:42.225067  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:42.225192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:42.254835  620795 cri.go:89] found id: ""
	I1213 12:04:42.254913  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.254949  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:42.254975  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:42.255077  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:42.283814  620795 cri.go:89] found id: ""
	I1213 12:04:42.283889  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.283916  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:42.283931  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:42.284014  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:42.315795  620795 cri.go:89] found id: ""
	I1213 12:04:42.315823  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.315859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:42.315871  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:42.315954  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:42.342987  620795 cri.go:89] found id: ""
	I1213 12:04:42.343026  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.343035  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:42.343042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:42.343114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:42.368935  620795 cri.go:89] found id: ""
	I1213 12:04:42.368969  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.368978  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:42.368986  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:42.369052  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:42.398633  620795 cri.go:89] found id: ""
	I1213 12:04:42.398703  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.398727  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:42.398747  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:42.398834  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:42.424223  620795 cri.go:89] found id: ""
	I1213 12:04:42.424299  620795 logs.go:282] 0 containers: []
	W1213 12:04:42.424324  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:42.424342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:42.424367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:42.453160  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:42.453198  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:42.486810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:42.486840  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:42.567003  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:42.567043  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:42.606556  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:42.606591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:42.678272  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:42.669759    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.670194    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.671849    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.672446    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:42.673383    3652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:04:45.037213  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:47.536268  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:45.178582  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:45.193685  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:45.193792  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:45.236374  620795 cri.go:89] found id: ""
	I1213 12:04:45.236402  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.236411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:45.236419  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:45.236487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:45.279160  620795 cri.go:89] found id: ""
	I1213 12:04:45.279193  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.279203  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:45.279210  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:45.279281  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:45.308966  620795 cri.go:89] found id: ""
	I1213 12:04:45.308991  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.309000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:45.309006  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:45.309065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:45.337083  620795 cri.go:89] found id: ""
	I1213 12:04:45.337110  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.337119  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:45.337126  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:45.337212  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:45.366596  620795 cri.go:89] found id: ""
	I1213 12:04:45.366619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.366628  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:45.366635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:45.366694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:45.391548  620795 cri.go:89] found id: ""
	I1213 12:04:45.391572  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.391581  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:45.391588  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:45.391649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:45.418598  620795 cri.go:89] found id: ""
	I1213 12:04:45.418619  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.418628  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:45.418635  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:45.418700  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:45.448270  620795 cri.go:89] found id: ""
	I1213 12:04:45.448292  620795 logs.go:282] 0 containers: []
	W1213 12:04:45.448301  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:45.448310  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:45.448321  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:45.478882  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:45.478907  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:45.548829  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:45.548916  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:45.567213  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:45.567382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:45.681775  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:45.673956    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.674517    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676147    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.676639    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:45.678185    3764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:45.681800  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:45.681816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.211634  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:48.222293  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:48.222364  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:48.249683  620795 cri.go:89] found id: ""
	I1213 12:04:48.249707  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.249715  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:48.249722  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:48.249785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:48.277977  620795 cri.go:89] found id: ""
	I1213 12:04:48.277999  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.278009  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:48.278015  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:48.278072  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:48.304052  620795 cri.go:89] found id: ""
	I1213 12:04:48.304080  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.304089  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:48.304096  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:48.304153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:48.334039  620795 cri.go:89] found id: ""
	I1213 12:04:48.334066  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.334075  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:48.334087  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:48.334151  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:48.364623  620795 cri.go:89] found id: ""
	I1213 12:04:48.364646  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.364654  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:48.364661  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:48.364723  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:48.389613  620795 cri.go:89] found id: ""
	I1213 12:04:48.389684  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.389707  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:48.389718  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:48.389797  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:48.418439  620795 cri.go:89] found id: ""
	I1213 12:04:48.418467  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.418477  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:48.418485  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:48.418544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:48.446312  620795 cri.go:89] found id: ""
	I1213 12:04:48.446341  620795 logs.go:282] 0 containers: []
	W1213 12:04:48.446350  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:48.446360  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:48.446372  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:48.463031  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:48.463116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:48.558736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:48.546104    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.546489    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550180    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.550521    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:48.554948    3856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:48.558767  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:48.558782  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:48.606808  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:48.606885  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:48.638169  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:48.638199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:49.729332  620795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1213 12:04:49.791669  620795 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1213 12:04:49.791778  620795 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1213 12:04:49.794717  620795 out.go:179] * Enabled addons: 
	W1213 12:04:50.037029  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:52.037265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:49.797659  620795 addons.go:530] duration metric: took 1m53.008142261s for enable addons: enabled=[]
	I1213 12:04:51.210580  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:51.221809  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:51.221877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:51.247182  620795 cri.go:89] found id: ""
	I1213 12:04:51.247259  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.247282  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:51.247301  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:51.247396  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:51.275541  620795 cri.go:89] found id: ""
	I1213 12:04:51.275608  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.275623  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:51.275631  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:51.275695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:51.300774  620795 cri.go:89] found id: ""
	I1213 12:04:51.300866  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.300889  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:51.300902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:51.300973  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:51.330039  620795 cri.go:89] found id: ""
	I1213 12:04:51.330064  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.330074  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:51.330080  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:51.330152  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:51.358455  620795 cri.go:89] found id: ""
	I1213 12:04:51.358482  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.358491  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:51.358497  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:51.358556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:51.387907  620795 cri.go:89] found id: ""
	I1213 12:04:51.387933  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.387942  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:51.387948  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:51.388011  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:51.414050  620795 cri.go:89] found id: ""
	I1213 12:04:51.414075  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.414084  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:51.414091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:51.414148  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:51.440682  620795 cri.go:89] found id: ""
	I1213 12:04:51.440715  620795 logs.go:282] 0 containers: []
	W1213 12:04:51.440729  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:51.440739  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:51.440752  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:51.502275  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:51.494090    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.494838    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.496561    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.497152    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:51.498687    3969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:51.502296  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:51.502308  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:51.533683  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:51.533722  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:51.590439  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:51.590468  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:51.668678  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:51.668719  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.186166  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:54.196649  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:54.196718  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:54.221630  620795 cri.go:89] found id: ""
	I1213 12:04:54.221656  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.221665  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:54.221672  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:54.221729  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	W1213 12:04:54.537026  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:04:56.537082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:04:54.246332  620795 cri.go:89] found id: ""
	I1213 12:04:54.246354  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.246362  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:54.246368  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:54.246425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:54.274363  620795 cri.go:89] found id: ""
	I1213 12:04:54.274385  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.274396  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:54.274405  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:54.274465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:54.299013  620795 cri.go:89] found id: ""
	I1213 12:04:54.299036  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.299045  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:54.299051  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:54.299115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:54.325098  620795 cri.go:89] found id: ""
	I1213 12:04:54.325123  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.325133  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:54.325140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:54.325200  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:54.350290  620795 cri.go:89] found id: ""
	I1213 12:04:54.350318  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.350327  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:54.350334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:54.350394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:54.377186  620795 cri.go:89] found id: ""
	I1213 12:04:54.377209  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.377218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:54.377224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:54.377283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:54.409137  620795 cri.go:89] found id: ""
	I1213 12:04:54.409164  620795 logs.go:282] 0 containers: []
	W1213 12:04:54.409174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:54.409184  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:54.409196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:54.426177  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:54.426207  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:54.491873  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:54.483806    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.484379    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486107    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.486575    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:54.488295    4088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:54.491896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:54.491909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:04:54.521061  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:54.521153  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:54.580593  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:54.580623  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.166168  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:04:57.177178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:04:57.177255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:04:57.209135  620795 cri.go:89] found id: ""
	I1213 12:04:57.209170  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.209179  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:04:57.209186  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:04:57.209254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:04:57.236323  620795 cri.go:89] found id: ""
	I1213 12:04:57.236359  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.236368  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:04:57.236375  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:04:57.236433  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:04:57.261970  620795 cri.go:89] found id: ""
	I1213 12:04:57.261992  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.262001  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:04:57.262007  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:04:57.262064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:04:57.287149  620795 cri.go:89] found id: ""
	I1213 12:04:57.287171  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.287179  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:04:57.287186  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:04:57.287242  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:04:57.312282  620795 cri.go:89] found id: ""
	I1213 12:04:57.312307  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.312316  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:04:57.312322  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:04:57.312380  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:04:57.341454  620795 cri.go:89] found id: ""
	I1213 12:04:57.341480  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.341489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:04:57.341496  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:04:57.341559  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:04:57.366694  620795 cri.go:89] found id: ""
	I1213 12:04:57.366718  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.366729  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:04:57.366736  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:04:57.366795  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:04:57.392434  620795 cri.go:89] found id: ""
	I1213 12:04:57.392459  620795 logs.go:282] 0 containers: []
	W1213 12:04:57.392468  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:04:57.392478  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:04:57.392490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:04:57.426595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:04:57.426622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:04:57.490950  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:04:57.490984  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:04:57.508294  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:04:57.508326  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:04:57.637638  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:04:57.628307    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.629849    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.630282    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632060    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:04:57.632815    4215 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:04:57.637717  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:04:57.637746  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:04:59.037033  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:01.536339  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:00.166037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:00.211490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:00.212114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:00.294178  620795 cri.go:89] found id: ""
	I1213 12:05:00.294201  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.294210  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:00.294217  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:00.294285  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:00.376480  620795 cri.go:89] found id: ""
	I1213 12:05:00.376506  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.376516  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:00.376523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:00.376593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:00.416213  620795 cri.go:89] found id: ""
	I1213 12:05:00.416240  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.416250  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:00.416261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:00.416329  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:00.449590  620795 cri.go:89] found id: ""
	I1213 12:05:00.449620  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.449629  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:00.449637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:00.449722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:00.479461  620795 cri.go:89] found id: ""
	I1213 12:05:00.479486  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.479495  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:00.479502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:00.479589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:00.509094  620795 cri.go:89] found id: ""
	I1213 12:05:00.509123  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.509132  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:00.509138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:00.509204  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:00.583923  620795 cri.go:89] found id: ""
	I1213 12:05:00.583952  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.583962  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:00.583969  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:00.584049  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:00.624268  620795 cri.go:89] found id: ""
	I1213 12:05:00.624299  620795 logs.go:282] 0 containers: []
	W1213 12:05:00.624309  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:00.624322  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:00.624334  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:00.701394  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:00.692593    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.693524    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695465    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.695924    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:00.697491    4317 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:00.701419  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:00.701432  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:00.730125  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:00.730170  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:00.760465  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:00.760494  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:00.826577  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:00.826619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.345642  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:03.359010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:03.359082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:03.391792  620795 cri.go:89] found id: ""
	I1213 12:05:03.391816  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.391825  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:03.391832  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:03.391889  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:03.418730  620795 cri.go:89] found id: ""
	I1213 12:05:03.418759  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.418768  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:03.418774  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:03.418831  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:03.447034  620795 cri.go:89] found id: ""
	I1213 12:05:03.447062  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.447070  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:03.447077  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:03.447137  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:03.471737  620795 cri.go:89] found id: ""
	I1213 12:05:03.471763  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.471772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:03.471778  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:03.471832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:03.496618  620795 cri.go:89] found id: ""
	I1213 12:05:03.496641  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.496650  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:03.496656  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:03.496721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:03.538834  620795 cri.go:89] found id: ""
	I1213 12:05:03.538855  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.538901  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:03.538915  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:03.539006  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:03.577353  620795 cri.go:89] found id: ""
	I1213 12:05:03.577375  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.577437  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:03.577445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:03.577590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:03.613163  620795 cri.go:89] found id: ""
	I1213 12:05:03.613234  620795 logs.go:282] 0 containers: []
	W1213 12:05:03.613247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:03.613257  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:03.613296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:03.652148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:03.652174  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:03.718838  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:03.718879  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:03.736159  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:03.736189  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:03.801478  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:03.792834    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.793250    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.794944    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.795726    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:03.797245    4448 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:03.801504  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:03.801519  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:05:03.537034  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:06.036238  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:08.037112  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:06.330711  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:06.341136  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:06.341246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:06.366066  620795 cri.go:89] found id: ""
	I1213 12:05:06.366099  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.366108  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:06.366114  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:06.366178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:06.394525  620795 cri.go:89] found id: ""
	I1213 12:05:06.394563  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.394573  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:06.394580  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:06.394649  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:06.424244  620795 cri.go:89] found id: ""
	I1213 12:05:06.424312  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.424336  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:06.424357  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:06.424449  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:06.450497  620795 cri.go:89] found id: ""
	I1213 12:05:06.450529  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.450538  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:06.450545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:06.450614  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:06.475735  620795 cri.go:89] found id: ""
	I1213 12:05:06.475759  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.475768  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:06.475774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:06.475835  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:06.501224  620795 cri.go:89] found id: ""
	I1213 12:05:06.501248  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.501257  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:06.501263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:06.501322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:06.548385  620795 cri.go:89] found id: ""
	I1213 12:05:06.548410  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.548419  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:06.548425  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:06.548498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:06.613365  620795 cri.go:89] found id: ""
	I1213 12:05:06.613444  620795 logs.go:282] 0 containers: []
	W1213 12:05:06.613469  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:06.613490  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:06.613525  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:06.642036  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:06.642067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:06.675194  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:06.675218  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:06.743889  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:06.743933  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:06.760968  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:06.761004  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:06.828998  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:06.821066    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.821670    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823321    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.823818    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:06.825418    4563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:05:10.037152  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:12.536415  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:09.329981  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:09.340577  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:09.340644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:09.368902  620795 cri.go:89] found id: ""
	I1213 12:05:09.368926  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.368935  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:09.368941  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:09.369004  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:09.397232  620795 cri.go:89] found id: ""
	I1213 12:05:09.397263  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.397273  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:09.397280  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:09.397353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:09.424425  620795 cri.go:89] found id: ""
	I1213 12:05:09.424455  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.424465  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:09.424471  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:09.424529  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:09.449435  620795 cri.go:89] found id: ""
	I1213 12:05:09.449457  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.449466  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:09.449472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:09.449534  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:09.473489  620795 cri.go:89] found id: ""
	I1213 12:05:09.473512  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.473521  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:09.473527  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:09.473584  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:09.503533  620795 cri.go:89] found id: ""
	I1213 12:05:09.503560  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.503569  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:09.503576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:09.503632  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:09.569217  620795 cri.go:89] found id: ""
	I1213 12:05:09.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.569312  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:09.569331  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:09.569431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:09.616563  620795 cri.go:89] found id: ""
	I1213 12:05:09.616632  620795 logs.go:282] 0 containers: []
	W1213 12:05:09.616663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:09.616686  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:09.616726  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:09.645190  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:09.645217  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:09.710725  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:09.710760  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:09.727200  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:09.727231  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:09.793579  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:09.785934    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.786467    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.787974    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.788480    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:09.790090    4674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:09.793611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:09.793625  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.321617  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:12.332442  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:12.332517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:12.357812  620795 cri.go:89] found id: ""
	I1213 12:05:12.357835  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.357844  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:12.357851  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:12.357912  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:12.383803  620795 cri.go:89] found id: ""
	I1213 12:05:12.383827  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.383836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:12.383842  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:12.383902  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:12.408966  620795 cri.go:89] found id: ""
	I1213 12:05:12.409044  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.409061  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:12.409069  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:12.409183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:12.438466  620795 cri.go:89] found id: ""
	I1213 12:05:12.438491  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.438499  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:12.438506  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:12.438562  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:12.468347  620795 cri.go:89] found id: ""
	I1213 12:05:12.468375  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.468385  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:12.468391  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:12.468455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:12.493833  620795 cri.go:89] found id: ""
	I1213 12:05:12.493860  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.493869  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:12.493876  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:12.493936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:12.540091  620795 cri.go:89] found id: ""
	I1213 12:05:12.540120  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.540130  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:12.540137  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:12.540202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:12.593138  620795 cri.go:89] found id: ""
	I1213 12:05:12.593165  620795 logs.go:282] 0 containers: []
	W1213 12:05:12.593174  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:12.593184  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:12.593195  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:12.670751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:12.670790  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:12.688162  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:12.688196  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:12.753953  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:12.745930    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.746540    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748217    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.748692    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:12.750290    4774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:12.753978  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:12.753990  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:12.782410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:12.782447  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:14.537113  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:17.037129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:15.314766  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:15.325177  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:15.325244  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:15.350233  620795 cri.go:89] found id: ""
	I1213 12:05:15.350259  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.350269  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:15.350276  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:15.350332  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:15.375095  620795 cri.go:89] found id: ""
	I1213 12:05:15.375121  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.375131  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:15.375138  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:15.375198  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:15.400509  620795 cri.go:89] found id: ""
	I1213 12:05:15.400531  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.400539  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:15.400545  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:15.400604  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:15.429727  620795 cri.go:89] found id: ""
	I1213 12:05:15.429749  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.429758  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:15.429765  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:15.429818  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:15.455300  620795 cri.go:89] found id: ""
	I1213 12:05:15.455321  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.455330  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:15.455336  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:15.455393  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:15.480516  620795 cri.go:89] found id: ""
	I1213 12:05:15.480540  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.480549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:15.480556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:15.480617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:15.508281  620795 cri.go:89] found id: ""
	I1213 12:05:15.508358  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.508375  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:15.508382  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:15.508453  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:15.569260  620795 cri.go:89] found id: ""
	I1213 12:05:15.569286  620795 logs.go:282] 0 containers: []
	W1213 12:05:15.569295  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:15.569304  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:15.569317  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:15.653590  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:15.653630  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:15.670770  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:15.670805  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:15.734152  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:15.725752    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.726494    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728223    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.728860    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:15.730656    4886 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:15.734221  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:15.734248  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:15.762906  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:15.762941  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.292789  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:18.303334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:18.303410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:18.329348  620795 cri.go:89] found id: ""
	I1213 12:05:18.329372  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.329382  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:18.329389  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:18.329455  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:18.358617  620795 cri.go:89] found id: ""
	I1213 12:05:18.358638  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.358647  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:18.358653  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:18.358710  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:18.383565  620795 cri.go:89] found id: ""
	I1213 12:05:18.383589  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.383597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:18.383603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:18.383666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:18.409351  620795 cri.go:89] found id: ""
	I1213 12:05:18.409378  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.409387  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:18.409394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:18.409456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:18.435771  620795 cri.go:89] found id: ""
	I1213 12:05:18.435797  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.435806  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:18.435813  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:18.435875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:18.464513  620795 cri.go:89] found id: ""
	I1213 12:05:18.464539  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.464549  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:18.464556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:18.464659  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:18.490219  620795 cri.go:89] found id: ""
	I1213 12:05:18.490244  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.490252  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:18.490260  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:18.490317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:18.532969  620795 cri.go:89] found id: ""
	I1213 12:05:18.532995  620795 logs.go:282] 0 containers: []
	W1213 12:05:18.533004  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:18.533013  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:18.533027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:18.595123  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:18.595154  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:18.672161  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:18.672201  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:18.689194  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:18.689222  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:18.754503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:18.745575    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.746298    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748026    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.748666    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:18.750610    5012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:18.754526  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:18.754539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:05:19.537079  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:22.037194  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:21.283365  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:21.294092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:21.294183  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:21.321526  620795 cri.go:89] found id: ""
	I1213 12:05:21.321549  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.321559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:21.321565  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:21.321622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:21.349919  620795 cri.go:89] found id: ""
	I1213 12:05:21.349943  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.349952  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:21.349958  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:21.350021  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:21.379881  620795 cri.go:89] found id: ""
	I1213 12:05:21.379906  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.379915  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:21.379922  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:21.379982  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:21.405656  620795 cri.go:89] found id: ""
	I1213 12:05:21.405679  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.405687  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:21.405694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:21.405754  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:21.435716  620795 cri.go:89] found id: ""
	I1213 12:05:21.435752  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.435762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:21.435769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:21.435839  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:21.461176  620795 cri.go:89] found id: ""
	I1213 12:05:21.461199  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.461207  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:21.461214  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:21.461271  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:21.487321  620795 cri.go:89] found id: ""
	I1213 12:05:21.487357  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.487366  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:21.487372  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:21.487438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:21.513663  620795 cri.go:89] found id: ""
	I1213 12:05:21.513687  620795 logs.go:282] 0 containers: []
	W1213 12:05:21.513696  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:21.513706  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:21.513740  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:21.547538  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:21.547713  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:21.648986  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:21.641895    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.642288    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.643954    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.644494    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:21.645453    5114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:21.649007  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:21.649020  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:21.676895  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:21.676929  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:21.706237  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:21.706268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:24.536202  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:26.537127  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:24.271406  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:24.281916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:24.281984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:24.306547  620795 cri.go:89] found id: ""
	I1213 12:05:24.306570  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.306579  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:24.306586  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:24.306645  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:24.334194  620795 cri.go:89] found id: ""
	I1213 12:05:24.334218  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.334227  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:24.334234  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:24.334291  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:24.360113  620795 cri.go:89] found id: ""
	I1213 12:05:24.360139  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.360148  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:24.360154  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:24.360219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:24.385854  620795 cri.go:89] found id: ""
	I1213 12:05:24.385879  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.385889  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:24.385896  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:24.385960  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:24.411999  620795 cri.go:89] found id: ""
	I1213 12:05:24.412025  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.412034  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:24.412042  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:24.412102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:24.438300  620795 cri.go:89] found id: ""
	I1213 12:05:24.438325  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.438335  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:24.438347  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:24.438405  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:24.464325  620795 cri.go:89] found id: ""
	I1213 12:05:24.464351  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.464361  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:24.464369  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:24.464430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:24.491896  620795 cri.go:89] found id: ""
	I1213 12:05:24.491920  620795 logs.go:282] 0 containers: []
	W1213 12:05:24.491930  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:24.491939  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:24.491971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:24.519363  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:24.519445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:24.616473  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:24.616502  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:24.692608  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:24.692645  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:24.711650  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:24.711689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:24.775602  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:24.767043    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.768309    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.769606    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.770273    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:24.771935    5240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.275849  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:27.286597  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:27.286680  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:27.311787  620795 cri.go:89] found id: ""
	I1213 12:05:27.311813  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.311822  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:27.311829  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:27.311893  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:27.341056  620795 cri.go:89] found id: ""
	I1213 12:05:27.341123  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.341146  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:27.341160  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:27.341233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:27.365944  620795 cri.go:89] found id: ""
	I1213 12:05:27.365978  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.365986  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:27.365993  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:27.366057  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:27.390576  620795 cri.go:89] found id: ""
	I1213 12:05:27.390611  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.390626  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:27.390633  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:27.390702  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:27.420415  620795 cri.go:89] found id: ""
	I1213 12:05:27.420439  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.420448  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:27.420454  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:27.420516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:27.445745  620795 cri.go:89] found id: ""
	I1213 12:05:27.445812  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.445835  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:27.445853  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:27.445936  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:27.475470  620795 cri.go:89] found id: ""
	I1213 12:05:27.475508  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.475538  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:27.475547  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:27.475615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:27.502195  620795 cri.go:89] found id: ""
	I1213 12:05:27.502222  620795 logs.go:282] 0 containers: []
	W1213 12:05:27.502231  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:27.502240  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:27.502252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:27.597636  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:27.597744  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:27.629736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:27.629763  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:27.694305  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:27.686679    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.687417    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.688918    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.689354    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:27.690840    5339 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:27.694327  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:27.694339  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:27.723090  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:27.723129  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:29.037051  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:31.536823  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:30.253217  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:30.264373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:30.264446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:30.290413  620795 cri.go:89] found id: ""
	I1213 12:05:30.290440  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.290450  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:30.290457  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:30.290517  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:30.318052  620795 cri.go:89] found id: ""
	I1213 12:05:30.318079  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.318096  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:30.318104  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:30.318172  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:30.343233  620795 cri.go:89] found id: ""
	I1213 12:05:30.343267  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.343277  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:30.343283  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:30.343349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:30.373053  620795 cri.go:89] found id: ""
	I1213 12:05:30.373077  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.373086  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:30.373092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:30.373149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:30.401783  620795 cri.go:89] found id: ""
	I1213 12:05:30.401862  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.401879  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:30.401886  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:30.401955  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:30.427557  620795 cri.go:89] found id: ""
	I1213 12:05:30.427580  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.427589  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:30.427595  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:30.427652  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:30.452324  620795 cri.go:89] found id: ""
	I1213 12:05:30.452404  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.452426  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:30.452445  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:30.452538  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:30.485213  620795 cri.go:89] found id: ""
	I1213 12:05:30.485283  620795 logs.go:282] 0 containers: []
	W1213 12:05:30.485307  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:30.485325  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:30.485337  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:30.567099  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:30.571250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:30.599905  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:30.599987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:30.671402  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:30.663820    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.664552    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.665833    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.666310    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:30.667892    5453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:30.671475  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:30.671544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:30.700275  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:30.700310  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:33.229307  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:33.240030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:33.240101  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:33.264516  620795 cri.go:89] found id: ""
	I1213 12:05:33.264540  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.264550  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:33.264557  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:33.264622  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:33.288665  620795 cri.go:89] found id: ""
	I1213 12:05:33.288694  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.288704  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:33.288711  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:33.288772  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:33.318238  620795 cri.go:89] found id: ""
	I1213 12:05:33.318314  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.318338  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:33.318356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:33.318437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:33.342548  620795 cri.go:89] found id: ""
	I1213 12:05:33.342582  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.342592  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:33.342598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:33.342667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:33.368791  620795 cri.go:89] found id: ""
	I1213 12:05:33.368814  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.368823  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:33.368829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:33.368887  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:33.395218  620795 cri.go:89] found id: ""
	I1213 12:05:33.395254  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.395263  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:33.395270  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:33.395342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:33.422228  620795 cri.go:89] found id: ""
	I1213 12:05:33.422263  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.422272  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:33.422279  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:33.422345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:33.448101  620795 cri.go:89] found id: ""
	I1213 12:05:33.448126  620795 logs.go:282] 0 containers: []
	W1213 12:05:33.448136  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:33.448146  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:33.448164  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:33.513958  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:33.513995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:33.536519  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:33.536547  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:33.642718  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:33.634504    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.635083    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.636790    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.637471    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:33.638479    5565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:33.642742  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:33.642757  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:33.671233  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:33.671268  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:34.036325  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:36.536291  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:36.205718  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:36.216490  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:36.216599  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:36.242239  620795 cri.go:89] found id: ""
	I1213 12:05:36.242267  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.242277  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:36.242284  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:36.242345  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:36.267114  620795 cri.go:89] found id: ""
	I1213 12:05:36.267140  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.267149  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:36.267155  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:36.267221  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:36.292484  620795 cri.go:89] found id: ""
	I1213 12:05:36.292510  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.292519  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:36.292525  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:36.292586  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:36.317342  620795 cri.go:89] found id: ""
	I1213 12:05:36.317365  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.317374  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:36.317380  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:36.317442  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:36.346675  620795 cri.go:89] found id: ""
	I1213 12:05:36.346746  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.346770  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:36.346788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:36.346878  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:36.374350  620795 cri.go:89] found id: ""
	I1213 12:05:36.374416  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.374440  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:36.374459  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:36.374550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:36.401836  620795 cri.go:89] found id: ""
	I1213 12:05:36.401904  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.401927  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:36.401947  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:36.402023  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:36.436530  620795 cri.go:89] found id: ""
	I1213 12:05:36.436612  620795 logs.go:282] 0 containers: []
	W1213 12:05:36.436635  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:36.436653  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:36.436680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:36.464595  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:36.464663  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:36.550070  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:36.550121  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:36.581383  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:36.581414  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:36.674763  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:36.666501    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.667311    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.668765    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.669457    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:36.671114    5691 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:36.674830  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:36.674854  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:39.203663  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:39.214134  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:39.214211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:05:39.036349  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:41.036401  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:43.037206  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:39.240674  620795 cri.go:89] found id: ""
	I1213 12:05:39.240705  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.240714  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:39.240721  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:39.240786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:39.265873  620795 cri.go:89] found id: ""
	I1213 12:05:39.265895  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.265903  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:39.265909  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:39.265966  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:39.291928  620795 cri.go:89] found id: ""
	I1213 12:05:39.291952  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.291960  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:39.291978  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:39.292037  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:39.317111  620795 cri.go:89] found id: ""
	I1213 12:05:39.317144  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.317153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:39.317160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:39.317219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:39.341971  620795 cri.go:89] found id: ""
	I1213 12:05:39.341993  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.342002  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:39.342009  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:39.342065  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:39.370095  620795 cri.go:89] found id: ""
	I1213 12:05:39.370166  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.370192  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:39.370212  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:39.370297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:39.396661  620795 cri.go:89] found id: ""
	I1213 12:05:39.396740  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.396765  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:39.396777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:39.396855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:39.426139  620795 cri.go:89] found id: ""
	I1213 12:05:39.426167  620795 logs.go:282] 0 containers: []
	W1213 12:05:39.426177  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:39.426188  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:39.426199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:39.458970  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:39.459002  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:39.525484  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:39.525523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:39.554066  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:39.554149  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:39.647487  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:39.639358    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.640049    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.641742    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.642473    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:39.644045    5806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:39.647508  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:39.647543  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.175675  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:42.189064  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:42.189149  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:42.220105  620795 cri.go:89] found id: ""
	I1213 12:05:42.220135  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.220156  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:42.220164  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:42.220229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:42.250459  620795 cri.go:89] found id: ""
	I1213 12:05:42.250486  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.250495  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:42.250502  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:42.250570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:42.278746  620795 cri.go:89] found id: ""
	I1213 12:05:42.278773  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.278785  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:42.278793  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:42.278855  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:42.307046  620795 cri.go:89] found id: ""
	I1213 12:05:42.307073  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.307083  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:42.307092  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:42.307153  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:42.335010  620795 cri.go:89] found id: ""
	I1213 12:05:42.335035  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.335046  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:42.335052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:42.335114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:42.362128  620795 cri.go:89] found id: ""
	I1213 12:05:42.362154  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.362163  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:42.362170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:42.362231  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:42.396146  620795 cri.go:89] found id: ""
	I1213 12:05:42.396175  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.396186  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:42.396193  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:42.396254  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:42.423111  620795 cri.go:89] found id: ""
	I1213 12:05:42.423137  620795 logs.go:282] 0 containers: []
	W1213 12:05:42.423146  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:42.423155  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:42.423167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:42.440295  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:42.440325  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:42.504038  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:42.496153    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.496984    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.498582    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.499023    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:42.500536    5900 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:42.504059  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:42.504071  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:42.550928  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:42.550966  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:42.608904  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:42.608935  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:05:45.037527  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:47.536245  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:45.181124  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:45.197731  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:45.197873  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:45.246027  620795 cri.go:89] found id: ""
	I1213 12:05:45.246070  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.246081  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:45.246106  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:45.246220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:45.279332  620795 cri.go:89] found id: ""
	I1213 12:05:45.279388  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.279398  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:45.279404  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:45.279509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:45.314910  620795 cri.go:89] found id: ""
	I1213 12:05:45.314988  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.315000  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:45.315010  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:45.315114  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:45.343055  620795 cri.go:89] found id: ""
	I1213 12:05:45.343130  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.343153  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:45.343175  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:45.343282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:45.370166  620795 cri.go:89] found id: ""
	I1213 12:05:45.370240  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.370275  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:45.370299  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:45.370391  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:45.396456  620795 cri.go:89] found id: ""
	I1213 12:05:45.396480  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.396489  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:45.396495  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:45.396550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:45.421687  620795 cri.go:89] found id: ""
	I1213 12:05:45.421711  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.421720  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:45.421726  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:45.421781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:45.446648  620795 cri.go:89] found id: ""
	I1213 12:05:45.446672  620795 logs.go:282] 0 containers: []
	W1213 12:05:45.446681  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:45.446691  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:45.446702  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:45.512020  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:45.512055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:45.543051  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:45.543084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:45.640767  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:45.633029    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.633452    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.634983    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.635597    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:45.637148    6022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:45.640789  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:45.640802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:45.670787  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:45.670822  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:48.201632  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:48.211975  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:48.212046  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:48.241331  620795 cri.go:89] found id: ""
	I1213 12:05:48.241355  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.241364  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:48.241371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:48.241430  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:48.266481  620795 cri.go:89] found id: ""
	I1213 12:05:48.266506  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.266515  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:48.266523  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:48.266581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:48.292562  620795 cri.go:89] found id: ""
	I1213 12:05:48.292587  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.292597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:48.292604  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:48.292666  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:48.316829  620795 cri.go:89] found id: ""
	I1213 12:05:48.316853  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.316862  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:48.316869  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:48.316928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:48.341279  620795 cri.go:89] found id: ""
	I1213 12:05:48.341304  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.341313  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:48.341320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:48.341395  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:48.370602  620795 cri.go:89] found id: ""
	I1213 12:05:48.370668  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.370684  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:48.370692  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:48.370757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:48.395975  620795 cri.go:89] found id: ""
	I1213 12:05:48.396001  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.396011  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:48.396017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:48.396076  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:48.422104  620795 cri.go:89] found id: ""
	I1213 12:05:48.422129  620795 logs.go:282] 0 containers: []
	W1213 12:05:48.422139  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:48.422150  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:48.422163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:48.487414  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:48.487451  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:48.504893  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:48.504924  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:48.613440  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:48.605194    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.606037    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.607690    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.608269    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:48.609790    6133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:48.613472  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:48.613485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:48.643454  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:48.643496  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:05:49.537116  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:52.036281  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:51.173081  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:51.184091  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:51.184220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:51.209714  620795 cri.go:89] found id: ""
	I1213 12:05:51.209741  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.209751  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:51.209757  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:51.209815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:51.236381  620795 cri.go:89] found id: ""
	I1213 12:05:51.236414  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.236423  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:51.236429  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:51.236495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:51.266394  620795 cri.go:89] found id: ""
	I1213 12:05:51.266428  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.266437  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:51.266443  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:51.266509  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:51.293949  620795 cri.go:89] found id: ""
	I1213 12:05:51.293981  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.293991  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:51.293998  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:51.294062  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:51.324019  620795 cri.go:89] found id: ""
	I1213 12:05:51.324042  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.324056  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:51.324062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:51.324145  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:51.352992  620795 cri.go:89] found id: ""
	I1213 12:05:51.353023  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.353032  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:51.353039  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:51.353098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:51.378872  620795 cri.go:89] found id: ""
	I1213 12:05:51.378898  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.378907  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:51.378914  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:51.378976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:51.406670  620795 cri.go:89] found id: ""
	I1213 12:05:51.406695  620795 logs.go:282] 0 containers: []
	W1213 12:05:51.406703  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:51.406713  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:51.406728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:51.469269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:51.461277    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.461921    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463438    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.463899    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:51.465468    6240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:51.469290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:51.469304  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:51.497318  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:51.497352  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:51.534646  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:51.534680  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:51.618348  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:51.618388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.137197  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:54.147708  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:54.147778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:54.173064  620795 cri.go:89] found id: ""
	I1213 12:05:54.173089  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.173098  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:54.173105  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:54.173164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:54.198688  620795 cri.go:89] found id: ""
	I1213 12:05:54.198713  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.198723  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:54.198733  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:54.198789  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:54.224472  620795 cri.go:89] found id: ""
	I1213 12:05:54.224497  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.224506  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:54.224512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:54.224571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	W1213 12:05:54.536956  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:05:56.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:05:54.254875  620795 cri.go:89] found id: ""
	I1213 12:05:54.254900  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.254909  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:54.254916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:54.254985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:54.286287  620795 cri.go:89] found id: ""
	I1213 12:05:54.286314  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.286322  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:54.286329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:54.286384  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:54.312009  620795 cri.go:89] found id: ""
	I1213 12:05:54.312034  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.312043  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:54.312050  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:54.312109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:54.338472  620795 cri.go:89] found id: ""
	I1213 12:05:54.338506  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.338516  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:54.338522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:54.338590  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:54.363767  620795 cri.go:89] found id: ""
	I1213 12:05:54.363791  620795 logs.go:282] 0 containers: []
	W1213 12:05:54.363799  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:54.363810  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:54.363827  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:54.429426  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:54.429462  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:05:54.446820  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:54.446859  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:54.514113  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:54.505503    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.506092    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.507709    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.508420    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:54.510180    6362 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:54.514137  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:54.514150  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:54.547597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:54.547688  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.126156  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:05:57.136777  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:05:57.136854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:05:57.166084  620795 cri.go:89] found id: ""
	I1213 12:05:57.166107  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.166116  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:05:57.166122  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:05:57.166180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:05:57.194344  620795 cri.go:89] found id: ""
	I1213 12:05:57.194368  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.194377  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:05:57.194384  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:05:57.194445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:05:57.220264  620795 cri.go:89] found id: ""
	I1213 12:05:57.220289  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.220298  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:05:57.220305  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:05:57.220362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:05:57.245200  620795 cri.go:89] found id: ""
	I1213 12:05:57.245222  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.245230  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:05:57.245236  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:05:57.245292  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:05:57.272963  620795 cri.go:89] found id: ""
	I1213 12:05:57.272987  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.272996  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:05:57.273003  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:05:57.273061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:05:57.297916  620795 cri.go:89] found id: ""
	I1213 12:05:57.297940  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.297947  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:05:57.297954  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:05:57.298016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:05:57.323201  620795 cri.go:89] found id: ""
	I1213 12:05:57.323226  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.323235  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:05:57.323241  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:05:57.323301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:05:57.348727  620795 cri.go:89] found id: ""
	I1213 12:05:57.348759  620795 logs.go:282] 0 containers: []
	W1213 12:05:57.348769  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:05:57.348779  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:05:57.348794  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:05:57.424991  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:05:57.416858    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.417506    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419207    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.419713    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:05:57.421359    6467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:05:57.425015  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:05:57.425027  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:05:57.454618  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:05:57.454652  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:05:57.482599  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:05:57.482627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:05:57.556901  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:05:57.556982  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:05:58.537235  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:01.037253  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:00.078226  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:00.114729  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:00.114815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:00.214510  620795 cri.go:89] found id: ""
	I1213 12:06:00.214537  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.214547  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:00.214560  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:00.214644  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:00.283401  620795 cri.go:89] found id: ""
	I1213 12:06:00.283433  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.283443  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:00.283450  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:00.283564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:00.333853  620795 cri.go:89] found id: ""
	I1213 12:06:00.333946  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.333974  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:00.333999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:00.334124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:00.370564  620795 cri.go:89] found id: ""
	I1213 12:06:00.370647  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.370670  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:00.370693  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:00.370796  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:00.400318  620795 cri.go:89] found id: ""
	I1213 12:06:00.400355  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.400365  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:00.400373  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:00.400451  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:00.429349  620795 cri.go:89] found id: ""
	I1213 12:06:00.429376  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.429387  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:00.429394  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:00.429480  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:00.457513  620795 cri.go:89] found id: ""
	I1213 12:06:00.457540  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.457549  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:00.457555  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:00.457617  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:00.484050  620795 cri.go:89] found id: ""
	I1213 12:06:00.484077  620795 logs.go:282] 0 containers: []
	W1213 12:06:00.484086  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:00.484096  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:00.484110  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:00.564314  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:00.564357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:00.586853  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:00.586884  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:00.678609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:00.670112    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.670780    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672403    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.672752    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:00.674443    6590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:00.678679  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:00.678699  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:00.708726  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:00.708764  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:03.239868  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:03.250271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:03.250342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:03.278221  620795 cri.go:89] found id: ""
	I1213 12:06:03.278246  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.278254  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:03.278261  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:03.278323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:03.307255  620795 cri.go:89] found id: ""
	I1213 12:06:03.307280  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.307288  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:03.307295  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:03.307358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:03.334371  620795 cri.go:89] found id: ""
	I1213 12:06:03.334394  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.334402  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:03.334408  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:03.334465  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:03.359920  620795 cri.go:89] found id: ""
	I1213 12:06:03.359947  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.359959  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:03.359966  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:03.360026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:03.388349  620795 cri.go:89] found id: ""
	I1213 12:06:03.388373  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.388382  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:03.388389  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:03.388446  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:03.413684  620795 cri.go:89] found id: ""
	I1213 12:06:03.413712  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.413721  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:03.413727  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:03.413786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:03.438590  620795 cri.go:89] found id: ""
	I1213 12:06:03.438613  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.438622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:03.438629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:03.438686  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:03.466031  620795 cri.go:89] found id: ""
	I1213 12:06:03.466065  620795 logs.go:282] 0 containers: []
	W1213 12:06:03.466074  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:03.466084  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:03.466095  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:03.540002  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:03.540037  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:03.581254  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:03.581285  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:03.657609  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:03.648962    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.649736    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.651545    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.652112    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:03.653889    6706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:03.657641  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:03.657654  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:03.686248  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:03.686284  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:03.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:05.537188  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:07.537266  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:06.215254  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:06.226059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:06.226130  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:06.252206  620795 cri.go:89] found id: ""
	I1213 12:06:06.252229  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.252237  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:06.252243  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:06.252306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:06.282327  620795 cri.go:89] found id: ""
	I1213 12:06:06.282349  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.282358  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:06.282364  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:06.282425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:06.312866  620795 cri.go:89] found id: ""
	I1213 12:06:06.312889  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.312898  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:06.312905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:06.312964  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:06.339757  620795 cri.go:89] found id: ""
	I1213 12:06:06.339828  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.339851  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:06.339865  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:06.339937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:06.366465  620795 cri.go:89] found id: ""
	I1213 12:06:06.366491  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.366508  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:06.366515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:06.366589  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:06.395704  620795 cri.go:89] found id: ""
	I1213 12:06:06.395727  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.395735  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:06.395742  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:06.395800  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:06.420941  620795 cri.go:89] found id: ""
	I1213 12:06:06.420966  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.420974  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:06.420981  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:06.421040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:06.446747  620795 cri.go:89] found id: ""
	I1213 12:06:06.446771  620795 logs.go:282] 0 containers: []
	W1213 12:06:06.446781  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:06.446790  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:06.446802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:06.515396  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:06.515437  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:06.537368  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:06.537458  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:06.638118  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:06.626710    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630084    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.630705    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632330    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:06.632805    6824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:06.638202  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:06.638230  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:06.668749  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:06.668789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.204205  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:09.214694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:09.214763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1213 12:06:10.037386  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:12.536953  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:09.240252  620795 cri.go:89] found id: ""
	I1213 12:06:09.240291  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.240301  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:09.240307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:09.240372  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:09.267161  620795 cri.go:89] found id: ""
	I1213 12:06:09.267188  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.267197  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:09.267203  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:09.267263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:09.292472  620795 cri.go:89] found id: ""
	I1213 12:06:09.292501  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.292510  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:09.292517  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:09.292581  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:09.317718  620795 cri.go:89] found id: ""
	I1213 12:06:09.317745  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.317754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:09.317760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:09.317819  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:09.342979  620795 cri.go:89] found id: ""
	I1213 12:06:09.343006  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.343015  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:09.343021  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:09.343080  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:09.370344  620795 cri.go:89] found id: ""
	I1213 12:06:09.370368  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.370377  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:09.370383  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:09.370441  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:09.397428  620795 cri.go:89] found id: ""
	I1213 12:06:09.397451  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.397461  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:09.397467  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:09.397527  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:09.422862  620795 cri.go:89] found id: ""
	I1213 12:06:09.422890  620795 logs.go:282] 0 containers: []
	W1213 12:06:09.422900  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:09.422909  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:09.422923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:09.486031  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:09.478519    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.478948    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480477    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.480972    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:09.482466    6925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:09.486057  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:09.486070  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:09.514736  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:09.514772  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:09.586482  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:09.586558  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:09.660422  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:09.660459  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.179299  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:12.190230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:12.190302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:12.216052  620795 cri.go:89] found id: ""
	I1213 12:06:12.216076  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.216085  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:12.216092  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:12.216150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:12.245417  620795 cri.go:89] found id: ""
	I1213 12:06:12.245443  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.245453  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:12.245460  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:12.245525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:12.272357  620795 cri.go:89] found id: ""
	I1213 12:06:12.272382  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.272391  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:12.272397  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:12.272459  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:12.297431  620795 cri.go:89] found id: ""
	I1213 12:06:12.297458  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.297467  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:12.297479  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:12.297537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:12.322773  620795 cri.go:89] found id: ""
	I1213 12:06:12.322796  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.322805  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:12.322829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:12.322894  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:12.348212  620795 cri.go:89] found id: ""
	I1213 12:06:12.348278  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.348293  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:12.348301  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:12.348360  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:12.378078  620795 cri.go:89] found id: ""
	I1213 12:06:12.378105  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.378115  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:12.378122  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:12.378186  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:12.403938  620795 cri.go:89] found id: ""
	I1213 12:06:12.404005  620795 logs.go:282] 0 containers: []
	W1213 12:06:12.404029  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:12.404044  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:12.404056  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:12.432395  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:12.432433  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:12.465021  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:12.465055  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:12.533527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:12.533564  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:12.557847  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:12.557876  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:12.649280  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:12.641558    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.641947    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.643630    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.644072    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:12.645646    7062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:15.036244  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:17.037163  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:15.150199  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:15.161093  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:15.161164  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:15.188375  620795 cri.go:89] found id: ""
	I1213 12:06:15.188402  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.188411  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:15.188420  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:15.188494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:15.213569  620795 cri.go:89] found id: ""
	I1213 12:06:15.213592  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.213601  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:15.213607  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:15.213667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:15.244468  620795 cri.go:89] found id: ""
	I1213 12:06:15.244490  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.244499  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:15.244505  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:15.244565  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:15.269446  620795 cri.go:89] found id: ""
	I1213 12:06:15.269469  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.269478  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:15.269484  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:15.269544  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:15.297921  620795 cri.go:89] found id: ""
	I1213 12:06:15.297947  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.297957  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:15.297965  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:15.298029  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:15.323225  620795 cri.go:89] found id: ""
	I1213 12:06:15.323248  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.323256  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:15.323263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:15.323322  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:15.349965  620795 cri.go:89] found id: ""
	I1213 12:06:15.349988  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.349999  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:15.350005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:15.350067  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:15.378207  620795 cri.go:89] found id: ""
	I1213 12:06:15.378236  620795 logs.go:282] 0 containers: []
	W1213 12:06:15.378247  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:15.378258  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:15.378271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:15.443150  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:15.443182  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:15.459353  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:15.459388  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:15.546545  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:15.517236    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.519883    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.520609    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.528433    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:15.536550    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:15.546611  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:15.546638  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:15.582173  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:15.582258  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:18.126037  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:18.137115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:18.137190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:18.164991  620795 cri.go:89] found id: ""
	I1213 12:06:18.165017  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.165026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:18.165033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:18.165092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:18.191806  620795 cri.go:89] found id: ""
	I1213 12:06:18.191832  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.191841  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:18.191848  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:18.191906  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:18.222284  620795 cri.go:89] found id: ""
	I1213 12:06:18.222310  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.222320  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:18.222329  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:18.222389  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:18.250305  620795 cri.go:89] found id: ""
	I1213 12:06:18.250332  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.250342  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:18.250348  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:18.250406  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:18.276798  620795 cri.go:89] found id: ""
	I1213 12:06:18.276823  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.276833  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:18.276841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:18.276901  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:18.301916  620795 cri.go:89] found id: ""
	I1213 12:06:18.301943  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.301952  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:18.301959  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:18.302017  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:18.327545  620795 cri.go:89] found id: ""
	I1213 12:06:18.327569  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.327577  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:18.327584  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:18.327681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:18.352817  620795 cri.go:89] found id: ""
	I1213 12:06:18.352844  620795 logs.go:282] 0 containers: []
	W1213 12:06:18.352854  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:18.352863  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:18.352902  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:18.418564  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:18.418601  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:18.434897  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:18.434928  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:18.499340  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:18.490649    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.491423    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.492978    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.493531    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:18.495112    7274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:18.499366  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:18.499380  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:18.528897  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:18.528980  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:19.537261  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:22.037303  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:21.104122  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:21.114671  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:21.114786  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:21.140990  620795 cri.go:89] found id: ""
	I1213 12:06:21.141014  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.141024  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:21.141030  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:21.141087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:21.168480  620795 cri.go:89] found id: ""
	I1213 12:06:21.168510  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.168519  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:21.168526  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:21.168583  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:21.193893  620795 cri.go:89] found id: ""
	I1213 12:06:21.193916  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.193924  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:21.193930  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:21.193985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:21.222789  620795 cri.go:89] found id: ""
	I1213 12:06:21.222811  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.222820  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:21.222827  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:21.222885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:21.254379  620795 cri.go:89] found id: ""
	I1213 12:06:21.254402  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.254411  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:21.254417  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:21.254476  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:21.280020  620795 cri.go:89] found id: ""
	I1213 12:06:21.280049  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.280058  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:21.280065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:21.280123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:21.305920  620795 cri.go:89] found id: ""
	I1213 12:06:21.305942  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.305952  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:21.305957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:21.306031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:21.334376  620795 cri.go:89] found id: ""
	I1213 12:06:21.334400  620795 logs.go:282] 0 containers: []
	W1213 12:06:21.334409  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:21.334417  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:21.334429  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:21.362868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:21.362906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:21.397678  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:21.397727  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:21.465535  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:21.465574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:21.482417  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:21.482443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:21.566636  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:21.557499    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.558882    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.559834    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561441    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:21.561752    7401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:24.068339  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:24.079607  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:24.079684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:24.105575  620795 cri.go:89] found id: ""
	I1213 12:06:24.105609  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.105619  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:24.105626  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:24.105696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:24.131798  620795 cri.go:89] found id: ""
	I1213 12:06:24.131830  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.131840  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:24.131846  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:24.131905  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:24.157068  620795 cri.go:89] found id: ""
	I1213 12:06:24.157096  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.157106  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:24.157113  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:24.157168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:24.186737  620795 cri.go:89] found id: ""
	I1213 12:06:24.186762  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.186772  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:24.186779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:24.186843  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:24.214700  620795 cri.go:89] found id: ""
	I1213 12:06:24.214726  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.214745  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:24.214751  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:24.214815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	W1213 12:06:24.537013  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:27.037104  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:24.242048  620795 cri.go:89] found id: ""
	I1213 12:06:24.242074  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.242083  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:24.242090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:24.242180  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:24.270953  620795 cri.go:89] found id: ""
	I1213 12:06:24.270978  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.270987  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:24.270994  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:24.271074  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:24.296220  620795 cri.go:89] found id: ""
	I1213 12:06:24.296246  620795 logs.go:282] 0 containers: []
	W1213 12:06:24.296256  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:24.296267  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:24.296278  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:24.325330  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:24.325367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:24.355217  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:24.355255  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:24.421526  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:24.421566  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:24.438978  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:24.439012  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:24.514169  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:24.505564    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.506202    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.507961    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.508730    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:24.510229    7519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.015192  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:27.026779  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:27.026871  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:27.054321  620795 cri.go:89] found id: ""
	I1213 12:06:27.054347  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.054357  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:27.054364  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:27.054423  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:27.084443  620795 cri.go:89] found id: ""
	I1213 12:06:27.084467  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.084476  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:27.084482  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:27.084542  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:27.110224  620795 cri.go:89] found id: ""
	I1213 12:06:27.110251  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.110260  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:27.110267  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:27.110326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:27.141821  620795 cri.go:89] found id: ""
	I1213 12:06:27.141847  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.141857  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:27.141863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:27.141953  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:27.168110  620795 cri.go:89] found id: ""
	I1213 12:06:27.168143  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.168153  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:27.168160  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:27.168228  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:27.193708  620795 cri.go:89] found id: ""
	I1213 12:06:27.193775  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.193791  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:27.193802  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:27.193862  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:27.220542  620795 cri.go:89] found id: ""
	I1213 12:06:27.220569  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.220578  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:27.220585  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:27.220673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:27.248536  620795 cri.go:89] found id: ""
	I1213 12:06:27.248614  620795 logs.go:282] 0 containers: []
	W1213 12:06:27.248630  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:27.248641  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:27.248653  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:27.314354  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:27.314389  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:27.331795  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:27.331824  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:27.397269  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:27.389020    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.389779    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391484    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.391978    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:27.393471    7623 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:27.397290  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:27.397303  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:27.425995  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:27.426034  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:29.537185  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:32.037043  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:29.964336  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:29.975190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:29.975264  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:30.020235  620795 cri.go:89] found id: ""
	I1213 12:06:30.020330  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.020353  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:30.020373  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:30.020492  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:30.064384  620795 cri.go:89] found id: ""
	I1213 12:06:30.064422  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.064431  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:30.064438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:30.064537  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:30.093930  620795 cri.go:89] found id: ""
	I1213 12:06:30.093974  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.094003  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:30.094018  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:30.094092  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:30.121799  620795 cri.go:89] found id: ""
	I1213 12:06:30.121830  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.121846  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:30.121854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:30.121994  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:30.150127  620795 cri.go:89] found id: ""
	I1213 12:06:30.150153  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.150163  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:30.150170  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:30.150232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:30.177848  620795 cri.go:89] found id: ""
	I1213 12:06:30.177873  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.177883  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:30.177889  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:30.177948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:30.204179  620795 cri.go:89] found id: ""
	I1213 12:06:30.204216  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.204225  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:30.204235  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:30.204295  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:30.230625  620795 cri.go:89] found id: ""
	I1213 12:06:30.230653  620795 logs.go:282] 0 containers: []
	W1213 12:06:30.230663  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:30.230673  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:30.230685  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:30.297598  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:30.297634  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:30.314962  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:30.314993  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:30.380114  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:30.371745    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.372555    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374185    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.374477    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:30.376001    7737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:30.380136  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:30.380148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:30.408485  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:30.408523  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:32.936773  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:32.947334  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:32.947408  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:32.974265  620795 cri.go:89] found id: ""
	I1213 12:06:32.974291  620795 logs.go:282] 0 containers: []
	W1213 12:06:32.974300  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:32.974307  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:32.974365  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:33.005585  620795 cri.go:89] found id: ""
	I1213 12:06:33.005616  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.005627  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:33.005633  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:33.005704  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:33.036036  620795 cri.go:89] found id: ""
	I1213 12:06:33.036058  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.036072  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:33.036079  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:33.036136  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:33.062415  620795 cri.go:89] found id: ""
	I1213 12:06:33.062439  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.062448  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:33.062455  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:33.062515  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:33.091004  620795 cri.go:89] found id: ""
	I1213 12:06:33.091072  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.091095  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:33.091115  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:33.091193  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:33.116964  620795 cri.go:89] found id: ""
	I1213 12:06:33.116989  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.116999  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:33.117005  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:33.117084  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:33.143886  620795 cri.go:89] found id: ""
	I1213 12:06:33.143908  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.143918  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:33.143924  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:33.143984  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:33.177672  620795 cri.go:89] found id: ""
	I1213 12:06:33.177697  620795 logs.go:282] 0 containers: []
	W1213 12:06:33.177707  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:33.177716  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:33.177728  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:33.194235  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:33.194266  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:33.258679  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:33.250574    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.251172    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.252678    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.253209    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:33.254656    7844 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:33.258703  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:33.258715  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:33.287694  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:33.287731  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:33.319142  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:33.319168  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:34.037106  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:36.037218  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:35.883653  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:35.894470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:35.894540  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:35.922164  620795 cri.go:89] found id: ""
	I1213 12:06:35.922243  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.922268  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:35.922286  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:35.922378  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:35.948794  620795 cri.go:89] found id: ""
	I1213 12:06:35.948824  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.948833  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:35.948840  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:35.948916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:35.976985  620795 cri.go:89] found id: ""
	I1213 12:06:35.977012  620795 logs.go:282] 0 containers: []
	W1213 12:06:35.977023  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:35.977030  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:35.977097  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:36.008179  620795 cri.go:89] found id: ""
	I1213 12:06:36.008210  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.008221  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:36.008229  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:36.008306  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:36.037414  620795 cri.go:89] found id: ""
	I1213 12:06:36.037434  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.037442  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:36.037448  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:36.037505  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:36.066253  620795 cri.go:89] found id: ""
	I1213 12:06:36.066290  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.066304  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:36.066319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:36.066394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:36.093841  620795 cri.go:89] found id: ""
	I1213 12:06:36.093938  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.093955  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:36.093963  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:36.094042  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:36.119692  620795 cri.go:89] found id: ""
	I1213 12:06:36.119728  620795 logs.go:282] 0 containers: []
	W1213 12:06:36.119737  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:36.119747  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:36.119761  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:36.136247  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:36.136322  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:36.202464  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:36.194729    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.195344    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.196865    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.197429    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:36.198995    7961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:36.202486  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:36.202500  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:36.230571  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:36.230606  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:36.257928  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:36.257955  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:38.826068  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:38.841833  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:38.841915  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:38.871763  620795 cri.go:89] found id: ""
	I1213 12:06:38.871788  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.871797  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:38.871803  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:38.871870  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:38.897931  620795 cri.go:89] found id: ""
	I1213 12:06:38.897956  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.897966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:38.897972  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:38.898064  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:38.928095  620795 cri.go:89] found id: ""
	I1213 12:06:38.928121  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.928131  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:38.928138  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:38.928202  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:38.954066  620795 cri.go:89] found id: ""
	I1213 12:06:38.954090  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.954098  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:38.954105  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:38.954168  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:38.978723  620795 cri.go:89] found id: ""
	I1213 12:06:38.978752  620795 logs.go:282] 0 containers: []
	W1213 12:06:38.978762  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:38.978769  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:38.978825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:39.006341  620795 cri.go:89] found id: ""
	I1213 12:06:39.006374  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.006383  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:39.006390  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:39.006462  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:39.032585  620795 cri.go:89] found id: ""
	I1213 12:06:39.032612  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.032622  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:39.032629  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:39.032699  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:39.061395  620795 cri.go:89] found id: ""
	I1213 12:06:39.061426  620795 logs.go:282] 0 containers: []
	W1213 12:06:39.061436  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:39.061446  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:39.061457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:39.091343  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:39.091367  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:39.160940  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:39.160987  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:39.177451  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:39.177490  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:38.536279  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:40.537278  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:43.037128  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:39.246489  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:39.238660    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.239263    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241330    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.241646    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:39.243151    8090 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:39.246510  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:39.246524  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:41.775639  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:41.794476  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:41.794600  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:41.831000  620795 cri.go:89] found id: ""
	I1213 12:06:41.831074  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.831102  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:41.831121  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:41.831203  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:41.872779  620795 cri.go:89] found id: ""
	I1213 12:06:41.872806  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.872816  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:41.872823  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:41.872903  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:41.902394  620795 cri.go:89] found id: ""
	I1213 12:06:41.902420  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.902429  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:41.902435  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:41.902494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:41.929459  620795 cri.go:89] found id: ""
	I1213 12:06:41.929485  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.929494  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:41.929501  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:41.929563  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:41.955676  620795 cri.go:89] found id: ""
	I1213 12:06:41.955700  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.955716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:41.955724  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:41.955783  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:41.981839  620795 cri.go:89] found id: ""
	I1213 12:06:41.981865  620795 logs.go:282] 0 containers: []
	W1213 12:06:41.981875  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:41.981882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:41.981939  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:42.021720  620795 cri.go:89] found id: ""
	I1213 12:06:42.021808  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.021827  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:42.021836  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:42.021908  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:42.052304  620795 cri.go:89] found id: ""
	I1213 12:06:42.052332  620795 logs.go:282] 0 containers: []
	W1213 12:06:42.052341  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:42.052351  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:42.052382  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:42.071214  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:42.071250  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:42.151103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:42.141536    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.142506    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144362    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.144822    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:42.146635    8191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:42.151127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:42.151146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:42.183473  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:42.183646  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:42.226797  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:42.226834  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:06:45.037308  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:47.537265  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:44.796943  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:44.821281  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:44.821413  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:44.863598  620795 cri.go:89] found id: ""
	I1213 12:06:44.863672  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.863697  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:44.863718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:44.863805  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:44.892309  620795 cri.go:89] found id: ""
	I1213 12:06:44.892395  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.892418  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:44.892438  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:44.892552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:44.918444  620795 cri.go:89] found id: ""
	I1213 12:06:44.918522  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.918557  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:44.918581  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:44.918673  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:44.944223  620795 cri.go:89] found id: ""
	I1213 12:06:44.944249  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.944258  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:44.944265  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:44.944327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:44.970515  620795 cri.go:89] found id: ""
	I1213 12:06:44.970548  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.970559  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:44.970566  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:44.970626  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:44.996938  620795 cri.go:89] found id: ""
	I1213 12:06:44.996966  620795 logs.go:282] 0 containers: []
	W1213 12:06:44.996976  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:44.996983  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:44.997050  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:45.050971  620795 cri.go:89] found id: ""
	I1213 12:06:45.051001  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.051020  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:45.051028  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:45.051107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:45.095037  620795 cri.go:89] found id: ""
	I1213 12:06:45.095076  620795 logs.go:282] 0 containers: []
	W1213 12:06:45.095087  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:45.095098  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:45.095116  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:45.209528  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:45.209618  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:45.240275  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:45.240311  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:45.322872  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:45.312425    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.313157    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.314727    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.315938    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:45.316890    8304 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:45.322895  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:45.322909  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:45.353126  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:45.353162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:47.883672  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:47.894317  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:47.894394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:47.920883  620795 cri.go:89] found id: ""
	I1213 12:06:47.920909  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.920919  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:47.920927  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:47.920985  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:47.947168  620795 cri.go:89] found id: ""
	I1213 12:06:47.947197  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.947207  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:47.947214  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:47.947279  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:47.972678  620795 cri.go:89] found id: ""
	I1213 12:06:47.972701  620795 logs.go:282] 0 containers: []
	W1213 12:06:47.972710  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:47.972717  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:47.972779  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:48.010849  620795 cri.go:89] found id: ""
	I1213 12:06:48.010915  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.010939  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:48.010961  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:48.011038  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:48.040005  620795 cri.go:89] found id: ""
	I1213 12:06:48.040074  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.040098  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:48.040118  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:48.040211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:48.067778  620795 cri.go:89] found id: ""
	I1213 12:06:48.067806  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.067815  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:48.067822  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:48.067884  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:48.096165  620795 cri.go:89] found id: ""
	I1213 12:06:48.096207  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.096218  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:48.096224  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:48.096297  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:48.123725  620795 cri.go:89] found id: ""
	I1213 12:06:48.123761  620795 logs.go:282] 0 containers: []
	W1213 12:06:48.123771  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:48.123781  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:48.123793  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:48.153693  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:48.153733  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:48.185148  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:48.185227  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:48.251689  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:48.251724  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:48.269048  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:48.269079  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:48.336435  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:48.328704    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.329312    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.330862    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.331331    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:48.332839    8430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:50.037084  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:52.037310  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:50.836744  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:50.848522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:50.848593  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:50.874981  620795 cri.go:89] found id: ""
	I1213 12:06:50.875065  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.875088  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:50.875108  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:50.875219  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:50.900176  620795 cri.go:89] found id: ""
	I1213 12:06:50.900203  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.900213  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:50.900219  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:50.900277  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:50.929844  620795 cri.go:89] found id: ""
	I1213 12:06:50.929869  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.929878  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:50.929885  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:50.929943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:50.955008  620795 cri.go:89] found id: ""
	I1213 12:06:50.955033  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.955042  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:50.955049  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:50.955104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:50.982109  620795 cri.go:89] found id: ""
	I1213 12:06:50.982134  620795 logs.go:282] 0 containers: []
	W1213 12:06:50.982143  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:50.982149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:50.982211  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:51.013066  620795 cri.go:89] found id: ""
	I1213 12:06:51.013144  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.013160  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:51.013168  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:51.013236  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:51.042207  620795 cri.go:89] found id: ""
	I1213 12:06:51.042233  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.042243  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:51.042250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:51.042315  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:51.068089  620795 cri.go:89] found id: ""
	I1213 12:06:51.068116  620795 logs.go:282] 0 containers: []
	W1213 12:06:51.068125  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:51.068135  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:51.068146  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:51.136510  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:51.136550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:51.153539  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:51.153567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:51.227168  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:51.219231    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.219823    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.221668    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.222081    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:51.223742    8531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:51.227240  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:51.227271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:51.256505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:51.256541  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:53.786599  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:53.808412  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:53.808498  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:53.866097  620795 cri.go:89] found id: ""
	I1213 12:06:53.866124  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.866133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:53.866140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:53.866197  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:53.896398  620795 cri.go:89] found id: ""
	I1213 12:06:53.896426  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.896435  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:53.896442  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:53.896499  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:53.922228  620795 cri.go:89] found id: ""
	I1213 12:06:53.922255  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.922265  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:53.922271  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:53.922333  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:53.947081  620795 cri.go:89] found id: ""
	I1213 12:06:53.947107  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.947116  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:53.947123  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:53.947177  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:53.972340  620795 cri.go:89] found id: ""
	I1213 12:06:53.972365  620795 logs.go:282] 0 containers: []
	W1213 12:06:53.972374  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:53.972381  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:53.972437  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:54.000806  620795 cri.go:89] found id: ""
	I1213 12:06:54.000835  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.000844  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:54.000851  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:54.000925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:54.030584  620795 cri.go:89] found id: ""
	I1213 12:06:54.030617  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.030626  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:54.030648  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:54.030734  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:54.056807  620795 cri.go:89] found id: ""
	I1213 12:06:54.056833  620795 logs.go:282] 0 containers: []
	W1213 12:06:54.056842  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:54.056877  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:54.056897  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:54.122299  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:54.122347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:54.139911  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:54.139944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:54.202433  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:54.194761    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.195486    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197123    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.197444    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:54.198946    8647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:06:54.202453  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:54.202466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:54.230939  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:54.230977  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:06:54.536621  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:06:56.537197  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:56.761244  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:56.773199  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:56.773280  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:56.833295  620795 cri.go:89] found id: ""
	I1213 12:06:56.833323  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.833338  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:56.833345  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:56.833410  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:56.877141  620795 cri.go:89] found id: ""
	I1213 12:06:56.877179  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.877189  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:56.877195  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:56.877255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:56.909304  620795 cri.go:89] found id: ""
	I1213 12:06:56.909329  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.909337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:56.909344  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:56.909402  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:56.937175  620795 cri.go:89] found id: ""
	I1213 12:06:56.937206  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.937215  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:56.937222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:56.937283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:56.962816  620795 cri.go:89] found id: ""
	I1213 12:06:56.962839  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.962848  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:56.962854  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:56.962909  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:56.988340  620795 cri.go:89] found id: ""
	I1213 12:06:56.988364  620795 logs.go:282] 0 containers: []
	W1213 12:06:56.988372  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:56.988379  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:56.988438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:57.014873  620795 cri.go:89] found id: ""
	I1213 12:06:57.014956  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.014979  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:57.014997  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:57.015107  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:06:57.042222  620795 cri.go:89] found id: ""
	I1213 12:06:57.042295  620795 logs.go:282] 0 containers: []
	W1213 12:06:57.042331  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:06:57.042357  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:06:57.042383  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:06:57.070110  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:06:57.070148  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:06:57.097788  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:06:57.097812  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:06:57.164029  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:06:57.164067  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:06:57.182586  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:06:57.182619  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:06:57.253568  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:06:57.245349    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.246144    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.247745    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.248303    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:06:57.249920    8771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:06:59.037110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:01.537092  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:06:59.753877  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:06:59.764872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:06:59.764943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:06:59.794978  620795 cri.go:89] found id: ""
	I1213 12:06:59.795002  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.795016  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:06:59.795027  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:06:59.795086  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:06:59.832235  620795 cri.go:89] found id: ""
	I1213 12:06:59.832264  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.832276  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:06:59.832283  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:06:59.832342  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:06:59.879189  620795 cri.go:89] found id: ""
	I1213 12:06:59.879217  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.879227  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:06:59.879233  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:06:59.879296  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:06:59.906738  620795 cri.go:89] found id: ""
	I1213 12:06:59.906766  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.906775  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:06:59.906782  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:06:59.906838  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:06:59.934746  620795 cri.go:89] found id: ""
	I1213 12:06:59.934774  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.934783  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:06:59.934790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:06:59.934852  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:06:59.962016  620795 cri.go:89] found id: ""
	I1213 12:06:59.962049  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.962059  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:06:59.962066  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:06:59.962123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:06:59.988024  620795 cri.go:89] found id: ""
	I1213 12:06:59.988047  620795 logs.go:282] 0 containers: []
	W1213 12:06:59.988056  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:06:59.988062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:06:59.988118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:00.062022  620795 cri.go:89] found id: ""
	I1213 12:07:00.062049  620795 logs.go:282] 0 containers: []
	W1213 12:07:00.062059  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:00.062076  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:00.062094  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:00.179599  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:00.181365  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:00.211914  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:00.211958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:00.303311  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:00.290980    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.291674    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.293924    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295005    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:00.295928    8870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:00.303333  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:00.303347  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:00.339996  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:00.340039  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:02.882696  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:02.898926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:02.899000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:02.928919  620795 cri.go:89] found id: ""
	I1213 12:07:02.928949  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.928959  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:02.928967  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:02.929030  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:02.955168  620795 cri.go:89] found id: ""
	I1213 12:07:02.955194  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.955209  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:02.955215  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:02.955273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:02.984105  620795 cri.go:89] found id: ""
	I1213 12:07:02.984132  620795 logs.go:282] 0 containers: []
	W1213 12:07:02.984141  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:02.984159  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:02.984220  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:03.011185  620795 cri.go:89] found id: ""
	I1213 12:07:03.011210  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.011219  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:03.011227  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:03.011289  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:03.038557  620795 cri.go:89] found id: ""
	I1213 12:07:03.038580  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.038588  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:03.038594  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:03.038656  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:03.064610  620795 cri.go:89] found id: ""
	I1213 12:07:03.064650  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.064661  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:03.064667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:03.064725  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:03.090406  620795 cri.go:89] found id: ""
	I1213 12:07:03.090432  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.090441  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:03.090447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:03.090506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:03.117733  620795 cri.go:89] found id: ""
	I1213 12:07:03.117761  620795 logs.go:282] 0 containers: []
	W1213 12:07:03.117770  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:03.117780  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:03.117792  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:03.185975  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:03.177634    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.178390    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180015    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.180554    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:03.182089    8980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:03.185999  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:03.186011  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:03.214353  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:03.214387  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:03.244844  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:03.244873  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:03.310569  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:03.310608  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:07:04.037144  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:06.537015  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:05.828010  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:05.840499  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:05.840570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:05.867194  620795 cri.go:89] found id: ""
	I1213 12:07:05.867272  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.867295  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:05.867314  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:05.867394  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:05.894013  620795 cri.go:89] found id: ""
	I1213 12:07:05.894044  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.894054  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:05.894061  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:05.894126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:05.920207  620795 cri.go:89] found id: ""
	I1213 12:07:05.920234  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.920244  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:05.920250  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:05.920309  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:05.948255  620795 cri.go:89] found id: ""
	I1213 12:07:05.948280  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.948289  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:05.948295  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:05.948352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:05.975137  620795 cri.go:89] found id: ""
	I1213 12:07:05.975162  620795 logs.go:282] 0 containers: []
	W1213 12:07:05.975211  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:05.975222  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:05.975283  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:06.006992  620795 cri.go:89] found id: ""
	I1213 12:07:06.007020  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.007030  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:06.007037  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:06.007106  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:06.035032  620795 cri.go:89] found id: ""
	I1213 12:07:06.035067  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.035077  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:06.035084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:06.035157  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:06.066833  620795 cri.go:89] found id: ""
	I1213 12:07:06.066865  620795 logs.go:282] 0 containers: []
	W1213 12:07:06.066875  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:06.066885  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:06.066899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:06.134254  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:06.125473    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.125887    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.127536    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.128260    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:06.129881    9092 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:06.134284  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:06.134297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:06.163816  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:06.163852  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:06.194055  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:06.194084  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:06.262450  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:06.262550  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:08.779798  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:08.793568  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:08.793654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:08.848358  620795 cri.go:89] found id: ""
	I1213 12:07:08.848399  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.848408  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:08.848415  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:08.848485  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:08.881239  620795 cri.go:89] found id: ""
	I1213 12:07:08.881268  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.881278  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:08.881284  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:08.881358  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:08.912007  620795 cri.go:89] found id: ""
	I1213 12:07:08.912038  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.912059  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:08.912070  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:08.912143  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:08.948718  620795 cri.go:89] found id: ""
	I1213 12:07:08.948744  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.948754  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:08.948760  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:08.948815  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:08.974195  620795 cri.go:89] found id: ""
	I1213 12:07:08.974224  620795 logs.go:282] 0 containers: []
	W1213 12:07:08.974234  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:08.974240  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:08.974298  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:09.000368  620795 cri.go:89] found id: ""
	I1213 12:07:09.000409  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.000420  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:09.000428  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:09.000500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:09.027504  620795 cri.go:89] found id: ""
	I1213 12:07:09.027539  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.027548  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:09.027554  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:09.027611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:09.052844  620795 cri.go:89] found id: ""
	I1213 12:07:09.052870  620795 logs.go:282] 0 containers: []
	W1213 12:07:09.052879  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:09.052888  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:09.052899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:09.080443  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:09.080483  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:09.109721  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:09.109747  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:09.174545  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:09.174581  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:09.192943  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:09.192974  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:09.036994  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:11.537211  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:09.256162  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:09.248263    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.248774    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.250435    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.251054    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:09.252736    9227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:11.756459  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:11.766714  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:11.766784  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:11.797701  620795 cri.go:89] found id: ""
	I1213 12:07:11.797728  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.797737  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:11.797753  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:11.797832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:11.833489  620795 cri.go:89] found id: ""
	I1213 12:07:11.833563  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.833585  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:11.833604  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:11.833692  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:11.869283  620795 cri.go:89] found id: ""
	I1213 12:07:11.869305  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.869314  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:11.869320  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:11.869376  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:11.899820  620795 cri.go:89] found id: ""
	I1213 12:07:11.899845  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.899855  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:11.899862  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:11.899925  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:11.926125  620795 cri.go:89] found id: ""
	I1213 12:07:11.926150  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.926159  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:11.926166  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:11.926224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:11.952049  620795 cri.go:89] found id: ""
	I1213 12:07:11.952131  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.952165  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:11.952178  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:11.952250  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:11.982382  620795 cri.go:89] found id: ""
	I1213 12:07:11.982407  620795 logs.go:282] 0 containers: []
	W1213 12:07:11.982415  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:11.982421  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:11.982494  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:12.014887  620795 cri.go:89] found id: ""
	I1213 12:07:12.014912  620795 logs.go:282] 0 containers: []
	W1213 12:07:12.014921  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:12.014931  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:12.014943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:12.080370  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:12.080407  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:12.097493  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:12.097534  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:12.163658  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:12.155544    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.156277    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.157926    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.158224    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:12.159755    9327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:12.163680  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:12.163692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:12.192505  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:12.192544  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:14.037223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:16.537169  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:14.721085  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:14.731999  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:14.732070  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:14.758997  620795 cri.go:89] found id: ""
	I1213 12:07:14.759023  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.759032  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:14.759039  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:14.759098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:14.831264  620795 cri.go:89] found id: ""
	I1213 12:07:14.831294  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.831303  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:14.831310  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:14.831366  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:14.882934  620795 cri.go:89] found id: ""
	I1213 12:07:14.882964  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.882973  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:14.882980  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:14.883040  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:14.916858  620795 cri.go:89] found id: ""
	I1213 12:07:14.916888  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.916898  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:14.916905  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:14.916969  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:14.942297  620795 cri.go:89] found id: ""
	I1213 12:07:14.942334  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.942343  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:14.942355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:14.942431  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:14.967905  620795 cri.go:89] found id: ""
	I1213 12:07:14.967927  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.967936  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:14.967942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:14.968000  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:14.993041  620795 cri.go:89] found id: ""
	I1213 12:07:14.993107  620795 logs.go:282] 0 containers: []
	W1213 12:07:14.993131  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:14.993145  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:14.993224  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:15.027730  620795 cri.go:89] found id: ""
	I1213 12:07:15.027755  620795 logs.go:282] 0 containers: []
	W1213 12:07:15.027765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:15.027776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:15.027789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:15.095470  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:15.095507  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:15.113485  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:15.113567  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:15.183456  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:15.174486    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.175343    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177179    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.177821    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:15.179398    9441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:15.183481  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:15.183497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:15.212670  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:15.212706  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:17.745028  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:17.755868  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:17.755965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:17.830528  620795 cri.go:89] found id: ""
	I1213 12:07:17.830551  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.830559  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:17.830585  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:17.830654  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:17.866003  620795 cri.go:89] found id: ""
	I1213 12:07:17.866029  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.866038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:17.866044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:17.866102  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:17.891564  620795 cri.go:89] found id: ""
	I1213 12:07:17.891588  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.891597  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:17.891603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:17.891664  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:17.918740  620795 cri.go:89] found id: ""
	I1213 12:07:17.918768  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.918776  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:17.918783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:17.918845  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:17.950736  620795 cri.go:89] found id: ""
	I1213 12:07:17.950774  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.950784  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:17.950790  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:17.950854  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:17.976775  620795 cri.go:89] found id: ""
	I1213 12:07:17.976799  620795 logs.go:282] 0 containers: []
	W1213 12:07:17.976809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:17.976816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:17.976883  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:18.008430  620795 cri.go:89] found id: ""
	I1213 12:07:18.008460  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.008469  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:18.008477  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:18.008564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:18.037446  620795 cri.go:89] found id: ""
	I1213 12:07:18.037477  620795 logs.go:282] 0 containers: []
	W1213 12:07:18.037488  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:18.037502  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:18.037517  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:18.068414  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:18.068443  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:18.138588  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:18.138627  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:18.155698  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:18.155729  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:18.222792  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:18.215479    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.215981    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217571    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.217896    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:18.219409    9565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:18.222835  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:18.222847  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:19.037064  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:21.536199  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:20.751476  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:20.762121  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:20.762190  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:20.818771  620795 cri.go:89] found id: ""
	I1213 12:07:20.818794  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.818803  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:20.818810  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:20.818877  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:20.873533  620795 cri.go:89] found id: ""
	I1213 12:07:20.873556  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.873564  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:20.873581  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:20.873639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:20.900689  620795 cri.go:89] found id: ""
	I1213 12:07:20.900716  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.900725  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:20.900732  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:20.900790  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:20.926298  620795 cri.go:89] found id: ""
	I1213 12:07:20.926324  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.926334  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:20.926340  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:20.926400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:20.955692  620795 cri.go:89] found id: ""
	I1213 12:07:20.955767  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.955789  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:20.955808  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:20.955904  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:20.981101  620795 cri.go:89] found id: ""
	I1213 12:07:20.981126  620795 logs.go:282] 0 containers: []
	W1213 12:07:20.981135  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:20.981146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:20.981208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:21.012906  620795 cri.go:89] found id: ""
	I1213 12:07:21.012933  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.012942  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:21.012949  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:21.013024  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:21.043717  620795 cri.go:89] found id: ""
	I1213 12:07:21.043743  620795 logs.go:282] 0 containers: []
	W1213 12:07:21.043753  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:21.043764  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:21.043776  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:21.116319  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:21.116368  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:21.133173  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:21.133204  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:21.201103  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:21.193228    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.194101    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.195701    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.196170    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:21.197510    9669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:21.201127  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:21.201140  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:21.229422  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:21.229457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:23.763349  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:23.781088  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:23.781159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:23.857623  620795 cri.go:89] found id: ""
	I1213 12:07:23.857648  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.857666  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:23.857673  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:23.857736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:23.882807  620795 cri.go:89] found id: ""
	I1213 12:07:23.882833  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.882842  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:23.882849  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:23.882907  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:23.908402  620795 cri.go:89] found id: ""
	I1213 12:07:23.908430  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.908440  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:23.908447  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:23.908506  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:23.933800  620795 cri.go:89] found id: ""
	I1213 12:07:23.933826  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.933835  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:23.933841  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:23.933919  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:23.959222  620795 cri.go:89] found id: ""
	I1213 12:07:23.959248  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.959259  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:23.959266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:23.959352  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:23.985470  620795 cri.go:89] found id: ""
	I1213 12:07:23.985496  620795 logs.go:282] 0 containers: []
	W1213 12:07:23.985505  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:23.985512  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:23.985570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:24.014442  620795 cri.go:89] found id: ""
	I1213 12:07:24.014477  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.014487  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:24.014494  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:24.014556  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:24.043282  620795 cri.go:89] found id: ""
	I1213 12:07:24.043308  620795 logs.go:282] 0 containers: []
	W1213 12:07:24.043318  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:24.043328  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:24.043340  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:24.075046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:24.075073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:24.143658  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:24.143701  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:24.160736  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:24.160765  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:24.224652  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:24.215949    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.216643    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218385    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.218972    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:24.220693    9801 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:24.224675  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:24.224692  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:23.536309  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:25.537129  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:28.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:26.754848  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:26.765356  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:26.765429  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:26.818982  620795 cri.go:89] found id: ""
	I1213 12:07:26.819005  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.819013  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:26.819020  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:26.819078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:26.871231  620795 cri.go:89] found id: ""
	I1213 12:07:26.871253  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.871262  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:26.871268  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:26.871326  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:26.898363  620795 cri.go:89] found id: ""
	I1213 12:07:26.898443  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.898467  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:26.898486  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:26.898578  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:26.923840  620795 cri.go:89] found id: ""
	I1213 12:07:26.923866  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.923875  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:26.923882  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:26.923940  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:26.952921  620795 cri.go:89] found id: ""
	I1213 12:07:26.952950  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.952960  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:26.952967  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:26.953028  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:26.984162  620795 cri.go:89] found id: ""
	I1213 12:07:26.984188  620795 logs.go:282] 0 containers: []
	W1213 12:07:26.984197  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:26.984203  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:26.984282  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:27.022329  620795 cri.go:89] found id: ""
	I1213 12:07:27.022397  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.022413  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:27.022420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:27.022479  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:27.048366  620795 cri.go:89] found id: ""
	I1213 12:07:27.048391  620795 logs.go:282] 0 containers: []
	W1213 12:07:27.048401  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:27.048410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:27.048423  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:27.076996  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:27.077029  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:27.149458  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:27.149509  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:27.167444  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:27.167473  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:27.235232  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:27.227331    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.227820    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.229697    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.230220    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:27.231699    9917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:27.235258  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:27.235270  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:30.537006  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:33.036221  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:29.764538  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:29.791446  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:29.791560  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:29.844876  620795 cri.go:89] found id: ""
	I1213 12:07:29.844953  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.844976  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:29.844996  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:29.845082  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:29.884357  620795 cri.go:89] found id: ""
	I1213 12:07:29.884423  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.884441  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:29.884449  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:29.884508  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:29.914712  620795 cri.go:89] found id: ""
	I1213 12:07:29.914738  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.914748  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:29.914755  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:29.914813  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:29.940420  620795 cri.go:89] found id: ""
	I1213 12:07:29.940500  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.940516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:29.940524  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:29.940585  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:29.970378  620795 cri.go:89] found id: ""
	I1213 12:07:29.970404  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.970413  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:29.970420  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:29.970478  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:29.996803  620795 cri.go:89] found id: ""
	I1213 12:07:29.996881  620795 logs.go:282] 0 containers: []
	W1213 12:07:29.996898  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:29.996907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:29.996983  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:30.040874  620795 cri.go:89] found id: ""
	I1213 12:07:30.040904  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.040913  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:30.040920  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:30.040995  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:30.083632  620795 cri.go:89] found id: ""
	I1213 12:07:30.083658  620795 logs.go:282] 0 containers: []
	W1213 12:07:30.083667  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:30.083676  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:30.083689  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:30.149516  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:30.149553  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:30.167731  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:30.167816  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:30.233503  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:30.225039   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.225442   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227057   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.227805   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:30.229579   10018 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:30.233567  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:30.233586  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:30.263464  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:30.263497  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:32.796303  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:32.813180  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:32.813263  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:32.849335  620795 cri.go:89] found id: ""
	I1213 12:07:32.849413  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.849456  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:32.849481  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:32.849570  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:32.880068  620795 cri.go:89] found id: ""
	I1213 12:07:32.880092  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.880101  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:32.880107  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:32.880165  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:32.907166  620795 cri.go:89] found id: ""
	I1213 12:07:32.907193  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.907202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:32.907209  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:32.907266  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:32.933296  620795 cri.go:89] found id: ""
	I1213 12:07:32.933366  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.933388  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:32.933407  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:32.933500  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:32.959040  620795 cri.go:89] found id: ""
	I1213 12:07:32.959106  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.959130  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:32.959149  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:32.959233  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:32.989508  620795 cri.go:89] found id: ""
	I1213 12:07:32.989531  620795 logs.go:282] 0 containers: []
	W1213 12:07:32.989540  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:32.989546  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:32.989629  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:33.018978  620795 cri.go:89] found id: ""
	I1213 12:07:33.019002  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.019010  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:33.019017  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:33.019098  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:33.046327  620795 cri.go:89] found id: ""
	I1213 12:07:33.046359  620795 logs.go:282] 0 containers: []
	W1213 12:07:33.046368  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:33.046378  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:33.046419  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:33.075176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:33.075213  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:33.107277  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:33.107309  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:33.174349  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:33.174384  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:33.192737  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:33.192770  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:33.259992  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:33.251960   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.252364   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.253955   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.254311   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:33.255985   10140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:07:35.037005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:37.037071  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:35.760267  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:35.771899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:35.771965  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:35.816451  620795 cri.go:89] found id: ""
	I1213 12:07:35.816499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.816508  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:35.816519  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:35.816576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:35.874010  620795 cri.go:89] found id: ""
	I1213 12:07:35.874031  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.874040  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:35.874046  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:35.874109  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:35.901470  620795 cri.go:89] found id: ""
	I1213 12:07:35.901499  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.901509  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:35.901515  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:35.901577  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:35.929967  620795 cri.go:89] found id: ""
	I1213 12:07:35.929988  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.929997  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:35.930004  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:35.930061  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:35.959220  620795 cri.go:89] found id: ""
	I1213 12:07:35.959245  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.959255  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:35.959262  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:35.959323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:35.988889  620795 cri.go:89] found id: ""
	I1213 12:07:35.988916  620795 logs.go:282] 0 containers: []
	W1213 12:07:35.988925  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:35.988932  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:35.988990  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:36.017868  620795 cri.go:89] found id: ""
	I1213 12:07:36.017896  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.017906  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:36.017912  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:36.017975  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:36.046482  620795 cri.go:89] found id: ""
	I1213 12:07:36.046508  620795 logs.go:282] 0 containers: []
	W1213 12:07:36.046517  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:36.046527  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:36.046539  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:36.063480  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:36.063675  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:36.134374  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:36.125215   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.125817   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127378   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.127950   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:36.129158   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:36.134437  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:36.134465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:36.164786  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:36.164831  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:36.195048  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:36.195077  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:38.762384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:38.773774  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:38.773860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:38.823096  620795 cri.go:89] found id: ""
	I1213 12:07:38.823118  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.823127  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:38.823133  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:38.823192  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:38.859735  620795 cri.go:89] found id: ""
	I1213 12:07:38.859758  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.859766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:38.859773  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:38.859832  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:38.888780  620795 cri.go:89] found id: ""
	I1213 12:07:38.888806  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.888815  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:38.888821  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:38.888885  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:38.918480  620795 cri.go:89] found id: ""
	I1213 12:07:38.918506  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.918516  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:38.918522  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:38.918579  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:38.944442  620795 cri.go:89] found id: ""
	I1213 12:07:38.944475  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.944485  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:38.944492  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:38.944548  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:38.972111  620795 cri.go:89] found id: ""
	I1213 12:07:38.972138  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.972148  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:38.972156  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:38.972217  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:38.999220  620795 cri.go:89] found id: ""
	I1213 12:07:38.999249  620795 logs.go:282] 0 containers: []
	W1213 12:07:38.999259  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:38.999266  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:38.999387  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:39.027462  620795 cri.go:89] found id: ""
	I1213 12:07:39.027489  620795 logs.go:282] 0 containers: []
	W1213 12:07:39.027498  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:39.027508  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:39.027551  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:39.045387  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:39.045421  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:39.113555  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:39.104411   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.105461   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.106402   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108045   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:39.108696   10354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:39.113577  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:39.113591  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:39.141868  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:39.141905  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:39.170660  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:39.170687  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
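Each retry cycle above gathers the same diagnostics over SSH inside the node. As a reference only, the cycle can be approximated by hand with the commands copied verbatim from the log (assuming shell access to the affected node, e.g. via minikube ssh for the profile in question):

    # Sketch: the per-cycle checks the test driver runs, taken from the log above
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

In this run every `crictl ps` query returns an empty ID list and the `describe nodes` call fails with connection refused, which is why each cycle falls back to kubelet/CRI-O/dmesg logs only.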
	W1213 12:07:39.536473  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:41.536533  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:41.738914  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:41.749712  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:41.749788  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:41.815733  620795 cri.go:89] found id: ""
	I1213 12:07:41.815757  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.815767  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:41.815774  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:41.815837  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:41.853772  620795 cri.go:89] found id: ""
	I1213 12:07:41.853794  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.853802  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:41.853808  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:41.853864  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:41.880989  620795 cri.go:89] found id: ""
	I1213 12:07:41.881012  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.881021  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:41.881027  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:41.881085  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:41.910432  620795 cri.go:89] found id: ""
	I1213 12:07:41.910455  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.910464  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:41.910470  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:41.910525  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:41.938539  620795 cri.go:89] found id: ""
	I1213 12:07:41.938561  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.938570  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:41.938576  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:41.938636  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:41.964574  620795 cri.go:89] found id: ""
	I1213 12:07:41.964608  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.964617  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:41.964624  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:41.964681  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:41.989355  620795 cri.go:89] found id: ""
	I1213 12:07:41.989380  620795 logs.go:282] 0 containers: []
	W1213 12:07:41.989389  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:41.989396  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:41.989456  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:42.019802  620795 cri.go:89] found id: ""
	I1213 12:07:42.019830  620795 logs.go:282] 0 containers: []
	W1213 12:07:42.019839  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:42.019849  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:42.019861  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:42.052058  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:42.052087  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:42.123300  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:42.123360  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:42.144729  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:42.144768  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:42.227868  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:42.217286   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.218234   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.220463   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.221227   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:42.223007   10478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:42.227896  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:42.227910  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:44.037002  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:46.037183  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:44.760193  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:44.770916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:44.770989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:44.803100  620795 cri.go:89] found id: ""
	I1213 12:07:44.803124  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.803133  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:44.803140  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:44.803195  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:44.851212  620795 cri.go:89] found id: ""
	I1213 12:07:44.851235  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.851244  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:44.851250  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:44.851307  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:44.902052  620795 cri.go:89] found id: ""
	I1213 12:07:44.902075  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.902084  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:44.902090  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:44.902150  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:44.933898  620795 cri.go:89] found id: ""
	I1213 12:07:44.933926  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.933935  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:44.933942  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:44.934026  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:44.963132  620795 cri.go:89] found id: ""
	I1213 12:07:44.963158  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.963167  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:44.963174  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:44.963261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:44.988132  620795 cri.go:89] found id: ""
	I1213 12:07:44.988163  620795 logs.go:282] 0 containers: []
	W1213 12:07:44.988174  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:44.988181  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:44.988238  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:45.046906  620795 cri.go:89] found id: ""
	I1213 12:07:45.046934  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.046943  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:45.046951  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:45.047019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:45.080632  620795 cri.go:89] found id: ""
	I1213 12:07:45.080730  620795 logs.go:282] 0 containers: []
	W1213 12:07:45.080752  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:45.080792  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:45.080810  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:45.157685  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:45.157797  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:45.212507  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:45.212574  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:45.292666  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:45.284764   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.285529   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287091   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.287398   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:45.288940   10583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:45.292707  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:45.292720  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:45.321658  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:45.321690  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:47.858977  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:47.870353  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:47.870425  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:47.902849  620795 cri.go:89] found id: ""
	I1213 12:07:47.902874  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.902883  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:47.902890  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:47.902958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:47.928841  620795 cri.go:89] found id: ""
	I1213 12:07:47.928866  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.928875  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:47.928882  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:47.928943  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:47.954469  620795 cri.go:89] found id: ""
	I1213 12:07:47.954494  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.954503  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:47.954510  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:47.954571  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:47.984225  620795 cri.go:89] found id: ""
	I1213 12:07:47.984248  620795 logs.go:282] 0 containers: []
	W1213 12:07:47.984257  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:47.984263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:47.984327  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:48.013666  620795 cri.go:89] found id: ""
	I1213 12:07:48.013694  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.013704  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:48.013710  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:48.013776  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:48.043313  620795 cri.go:89] found id: ""
	I1213 12:07:48.043341  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.043351  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:48.043358  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:48.043445  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:48.070641  620795 cri.go:89] found id: ""
	I1213 12:07:48.070669  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.070680  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:48.070687  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:48.070767  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:48.096729  620795 cri.go:89] found id: ""
	I1213 12:07:48.096754  620795 logs.go:282] 0 containers: []
	W1213 12:07:48.096764  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:48.096773  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:48.096785  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:48.129289  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:48.129318  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:48.196743  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:48.196781  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:48.213775  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:48.213802  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:48.282000  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:48.273477   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.274412   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276291   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.276931   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:48.278357   10706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:48.282076  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:48.282104  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1213 12:07:48.537001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:50.537083  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:53.037078  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
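The repeated "connection refused" on 192.168.85.2:8443 (and on localhost:8443 in the describe-nodes attempts) means nothing is listening on the apiserver port, which is consistent with crictl finding no kube-apiserver container. A minimal sketch of a direct check from inside the node, assuming the ss utility is present in the node image (it is not shown in this log):

    # Sketch: confirm whether anything is bound to the apiserver port
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"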
	I1213 12:07:50.813946  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:50.834838  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:50.834928  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:50.871307  620795 cri.go:89] found id: ""
	I1213 12:07:50.871329  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.871337  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:50.871343  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:50.871400  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:50.900887  620795 cri.go:89] found id: ""
	I1213 12:07:50.900913  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.900922  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:50.900929  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:50.900987  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:50.926497  620795 cri.go:89] found id: ""
	I1213 12:07:50.926569  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.926606  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:50.926631  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:50.926721  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:50.954230  620795 cri.go:89] found id: ""
	I1213 12:07:50.954256  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.954266  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:50.954273  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:50.954331  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:50.980389  620795 cri.go:89] found id: ""
	I1213 12:07:50.980414  620795 logs.go:282] 0 containers: []
	W1213 12:07:50.980425  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:50.980431  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:50.980490  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:51.007396  620795 cri.go:89] found id: ""
	I1213 12:07:51.007423  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.007433  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:51.007444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:51.007507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:51.038515  620795 cri.go:89] found id: ""
	I1213 12:07:51.038540  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.038550  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:51.038556  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:51.038611  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:51.066063  620795 cri.go:89] found id: ""
	I1213 12:07:51.066088  620795 logs.go:282] 0 containers: []
	W1213 12:07:51.066096  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:51.066111  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:51.066122  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:51.131363  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:51.131402  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:51.148223  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:51.148253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:51.211768  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:51.204250   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.204888   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206374   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.206860   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:51.208288   10811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:51.211791  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:51.211807  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:51.239792  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:51.239825  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:53.772909  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:53.794190  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:53.794255  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:53.863195  620795 cri.go:89] found id: ""
	I1213 12:07:53.863228  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.863239  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:53.863246  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:53.863323  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:53.894744  620795 cri.go:89] found id: ""
	I1213 12:07:53.894812  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.894836  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:53.894855  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:53.894941  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:53.922176  620795 cri.go:89] found id: ""
	I1213 12:07:53.922244  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.922266  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:53.922284  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:53.922371  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:53.948409  620795 cri.go:89] found id: ""
	I1213 12:07:53.948437  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.948446  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:53.948453  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:53.948512  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:53.974142  620795 cri.go:89] found id: ""
	I1213 12:07:53.974222  620795 logs.go:282] 0 containers: []
	W1213 12:07:53.974244  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:53.974263  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:53.974369  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:54.002307  620795 cri.go:89] found id: ""
	I1213 12:07:54.002343  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.002353  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:54.002361  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:54.002440  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:54.030334  620795 cri.go:89] found id: ""
	I1213 12:07:54.030413  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.030438  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:54.030457  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:54.030566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:54.056614  620795 cri.go:89] found id: ""
	I1213 12:07:54.056697  620795 logs.go:282] 0 containers: []
	W1213 12:07:54.056713  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:54.056724  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:54.056737  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:54.124215  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:54.124253  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:07:54.141024  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:54.141052  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:54.203423  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:54.195491   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.196247   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.197856   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.198486   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:54.200023   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:54.203445  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:54.203457  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:54.231323  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:54.231355  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:07:55.037200  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:07:57.537019  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:07:56.762827  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:56.786084  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:56.786208  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:56.855486  620795 cri.go:89] found id: ""
	I1213 12:07:56.855531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.855542  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:56.855549  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:56.855615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:56.883436  620795 cri.go:89] found id: ""
	I1213 12:07:56.883531  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.883557  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:56.883587  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:56.883648  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:56.908626  620795 cri.go:89] found id: ""
	I1213 12:07:56.908708  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.908739  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:56.908752  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:56.908821  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:56.935174  620795 cri.go:89] found id: ""
	I1213 12:07:56.935201  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.935210  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:56.935217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:56.935302  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:56.964101  620795 cri.go:89] found id: ""
	I1213 12:07:56.964128  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.964139  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:56.964146  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:56.964232  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:56.989991  620795 cri.go:89] found id: ""
	I1213 12:07:56.990016  620795 logs.go:282] 0 containers: []
	W1213 12:07:56.990025  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:56.990032  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:56.990117  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:07:57.021908  620795 cri.go:89] found id: ""
	I1213 12:07:57.021934  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.021944  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:07:57.021952  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:07:57.022015  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:07:57.050893  620795 cri.go:89] found id: ""
	I1213 12:07:57.050919  620795 logs.go:282] 0 containers: []
	W1213 12:07:57.050929  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:07:57.050939  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:07:57.050958  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:07:57.114649  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:07:57.107304   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.107896   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109344   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.109787   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:07:57.111210   11027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:07:57.114709  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:07:57.114743  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:07:57.142743  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:07:57.142778  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:07:57.171088  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:07:57.171120  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:07:57.236905  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:07:57.236948  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1213 12:08:00.039297  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:02.536522  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
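While the test waits on the node "Ready" condition, apiserver liveness can also be probed directly. A minimal sketch, assuming the standard /readyz endpoint and the kubeconfig path shown in the log (neither probe appears in this run):

    # Sketch: query the apiserver readiness endpoint directly
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
    # or without kubectl (self-signed serving cert, hence -k)
    curl -k https://192.168.85.2:8443/readyz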
	I1213 12:07:59.754255  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:07:59.764877  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:07:59.764948  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:07:59.800655  620795 cri.go:89] found id: ""
	I1213 12:07:59.800682  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.800691  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:07:59.800698  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:07:59.800757  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:07:59.844261  620795 cri.go:89] found id: ""
	I1213 12:07:59.844289  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.844299  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:07:59.844305  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:07:59.844363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:07:59.890278  620795 cri.go:89] found id: ""
	I1213 12:07:59.890303  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.890313  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:07:59.890319  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:07:59.890379  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:07:59.918606  620795 cri.go:89] found id: ""
	I1213 12:07:59.918632  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.918641  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:07:59.918647  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:07:59.918703  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:07:59.947895  620795 cri.go:89] found id: ""
	I1213 12:07:59.947918  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.947928  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:07:59.947934  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:07:59.947993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:07:59.973045  620795 cri.go:89] found id: ""
	I1213 12:07:59.973073  620795 logs.go:282] 0 containers: []
	W1213 12:07:59.973082  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:07:59.973089  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:07:59.973163  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:00.009231  620795 cri.go:89] found id: ""
	I1213 12:08:00.009320  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.009353  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:00.009374  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:00.009507  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:00.119476  620795 cri.go:89] found id: ""
	I1213 12:08:00.119618  620795 logs.go:282] 0 containers: []
	W1213 12:08:00.119644  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:00.119687  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:00.119721  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:00.145226  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:00.145450  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:00.282893  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:00.266048   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.266988   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274032   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.274509   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:00.276639   11140 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:00.282923  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:00.282944  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:00.371336  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:00.371439  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:00.430461  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:00.430503  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.002113  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:03.014603  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:03.014679  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:03.042673  620795 cri.go:89] found id: ""
	I1213 12:08:03.042701  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.042711  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:03.042718  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:03.042778  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:03.074056  620795 cri.go:89] found id: ""
	I1213 12:08:03.074133  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.074164  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:03.074185  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:03.074301  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:03.101450  620795 cri.go:89] found id: ""
	I1213 12:08:03.101485  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.101495  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:03.101502  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:03.101564  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:03.132013  620795 cri.go:89] found id: ""
	I1213 12:08:03.132042  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.132053  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:03.132060  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:03.132123  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:03.158035  620795 cri.go:89] found id: ""
	I1213 12:08:03.158057  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.158067  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:03.158074  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:03.158131  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:03.183772  620795 cri.go:89] found id: ""
	I1213 12:08:03.183800  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.183809  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:03.183816  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:03.183879  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:03.209685  620795 cri.go:89] found id: ""
	I1213 12:08:03.209710  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.209718  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:03.209725  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:03.209809  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:03.238718  620795 cri.go:89] found id: ""
	I1213 12:08:03.238742  620795 logs.go:282] 0 containers: []
	W1213 12:08:03.238751  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:03.238760  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:03.238771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:03.266176  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:03.266211  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:03.295327  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:03.295357  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:03.371751  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:03.371796  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:03.388535  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:03.388569  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:03.455075  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:03.446801   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.447400   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.448900   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.449492   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:03.451125   11268 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:05.037001  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:07.037153  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:05.956468  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:05.967247  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:05.967349  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:05.992470  620795 cri.go:89] found id: ""
	I1213 12:08:05.992495  620795 logs.go:282] 0 containers: []
	W1213 12:08:05.992504  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:05.992510  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:05.992576  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:06.025309  620795 cri.go:89] found id: ""
	I1213 12:08:06.025339  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.025349  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:06.025356  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:06.025417  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:06.056164  620795 cri.go:89] found id: ""
	I1213 12:08:06.056192  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.056202  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:06.056208  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:06.056268  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:06.091020  620795 cri.go:89] found id: ""
	I1213 12:08:06.091047  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.091057  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:06.091063  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:06.091124  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:06.117741  620795 cri.go:89] found id: ""
	I1213 12:08:06.117767  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.117776  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:06.117792  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:06.117850  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:06.143430  620795 cri.go:89] found id: ""
	I1213 12:08:06.143454  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.143465  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:06.143472  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:06.143558  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:06.169857  620795 cri.go:89] found id: ""
	I1213 12:08:06.169883  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.169892  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:06.169899  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:06.169959  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:06.196298  620795 cri.go:89] found id: ""
	I1213 12:08:06.196325  620795 logs.go:282] 0 containers: []
	W1213 12:08:06.196335  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:06.196344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:06.196385  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:06.212572  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:06.212599  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:06.278450  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:06.270268   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.270834   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.272354   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.273016   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:06.274527   11366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:06.278473  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:06.278485  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:06.306640  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:06.306679  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:06.336266  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:06.336295  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:08.901791  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:08.912829  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:08.912897  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:08.942435  620795 cri.go:89] found id: ""
	I1213 12:08:08.942467  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.942476  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:08.942483  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:08.942552  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:08.968397  620795 cri.go:89] found id: ""
	I1213 12:08:08.968475  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.968508  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:08.968533  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:08.968615  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:08.995667  620795 cri.go:89] found id: ""
	I1213 12:08:08.995734  620795 logs.go:282] 0 containers: []
	W1213 12:08:08.995757  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:08.995776  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:08.995851  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:09.026748  620795 cri.go:89] found id: ""
	I1213 12:08:09.026827  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.026859  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:09.026878  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:09.026961  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:09.052881  620795 cri.go:89] found id: ""
	I1213 12:08:09.052910  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.052919  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:09.052926  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:09.053016  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:09.079635  620795 cri.go:89] found id: ""
	I1213 12:08:09.079663  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.079673  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:09.079679  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:09.079740  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:09.106465  620795 cri.go:89] found id: ""
	I1213 12:08:09.106499  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.106507  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:09.106529  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:09.106610  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:09.132296  620795 cri.go:89] found id: ""
	I1213 12:08:09.132373  620795 logs.go:282] 0 containers: []
	W1213 12:08:09.132389  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:09.132400  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:09.132411  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:09.198891  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:09.198937  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:09.215689  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:09.215718  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:09.536381  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:11.536495  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:09.283376  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:09.275383   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.276074   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.277779   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.278245   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:09.279888   11484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:09.283399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:09.283412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:09.311953  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:09.311995  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:11.844673  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:11.854957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:11.855031  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:11.884334  620795 cri.go:89] found id: ""
	I1213 12:08:11.884361  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.884370  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:11.884377  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:11.884438  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:11.911693  620795 cri.go:89] found id: ""
	I1213 12:08:11.911715  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.911724  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:11.911730  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:11.911785  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:11.939653  620795 cri.go:89] found id: ""
	I1213 12:08:11.939679  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.939688  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:11.939694  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:11.939753  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:11.965596  620795 cri.go:89] found id: ""
	I1213 12:08:11.965622  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.965631  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:11.965639  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:11.965695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:11.994822  620795 cri.go:89] found id: ""
	I1213 12:08:11.994848  620795 logs.go:282] 0 containers: []
	W1213 12:08:11.994857  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:11.994863  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:11.994921  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:12.027085  620795 cri.go:89] found id: ""
	I1213 12:08:12.027111  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.027119  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:12.027127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:12.027189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:12.060592  620795 cri.go:89] found id: ""
	I1213 12:08:12.060621  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.060631  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:12.060637  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:12.060695  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:12.087001  620795 cri.go:89] found id: ""
	I1213 12:08:12.087026  620795 logs.go:282] 0 containers: []
	W1213 12:08:12.087035  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:12.087046  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:12.087057  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:12.154968  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:12.155007  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:12.173266  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:12.173296  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:12.238320  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:12.230047   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.230756   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.232467   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.233052   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:12.234716   11601 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:12.238342  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:12.238353  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:12.266852  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:12.266886  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1213 12:08:14.037082  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:16.537099  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:14.799502  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:14.811316  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:14.811495  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:14.868310  620795 cri.go:89] found id: ""
	I1213 12:08:14.868404  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.868430  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:14.868485  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:14.868662  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:14.910677  620795 cri.go:89] found id: ""
	I1213 12:08:14.910744  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.910766  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:14.910785  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:14.910872  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:14.939727  620795 cri.go:89] found id: ""
	I1213 12:08:14.939767  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.939777  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:14.939783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:14.939849  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:14.966035  620795 cri.go:89] found id: ""
	I1213 12:08:14.966069  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.966078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:14.966086  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:14.966160  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:14.994530  620795 cri.go:89] found id: ""
	I1213 12:08:14.994596  620795 logs.go:282] 0 containers: []
	W1213 12:08:14.994619  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:14.994641  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:14.994727  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:15.032176  620795 cri.go:89] found id: ""
	I1213 12:08:15.032213  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.032223  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:15.032230  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:15.032294  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:15.063866  620795 cri.go:89] found id: ""
	I1213 12:08:15.063900  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.063910  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:15.063916  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:15.063977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:15.094824  620795 cri.go:89] found id: ""
	I1213 12:08:15.094857  620795 logs.go:282] 0 containers: []
	W1213 12:08:15.094867  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:15.094876  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:15.094888  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:15.123857  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:15.123926  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:15.189408  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:15.189444  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:15.208112  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:15.208143  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:15.272770  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:15.265015   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.265421   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.266883   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.267540   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:15.269262   11727 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:15.272794  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:15.272806  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:17.802242  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:17.818907  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:17.818976  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:17.860553  620795 cri.go:89] found id: ""
	I1213 12:08:17.860577  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.860586  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:17.860594  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:17.860663  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:17.890844  620795 cri.go:89] found id: ""
	I1213 12:08:17.890868  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.890877  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:17.890883  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:17.890937  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:17.916758  620795 cri.go:89] found id: ""
	I1213 12:08:17.916784  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.916794  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:17.916800  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:17.916860  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:17.946527  620795 cri.go:89] found id: ""
	I1213 12:08:17.946564  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.946573  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:17.946598  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:17.946684  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:17.971981  620795 cri.go:89] found id: ""
	I1213 12:08:17.972004  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.972013  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:17.972020  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:17.972075  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:17.997005  620795 cri.go:89] found id: ""
	I1213 12:08:17.997042  620795 logs.go:282] 0 containers: []
	W1213 12:08:17.997052  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:17.997059  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:17.997126  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:18.029007  620795 cri.go:89] found id: ""
	I1213 12:08:18.029038  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.029054  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:18.029061  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:18.029120  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:18.056596  620795 cri.go:89] found id: ""
	I1213 12:08:18.056625  620795 logs.go:282] 0 containers: []
	W1213 12:08:18.056637  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:18.056647  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:18.056661  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:18.074846  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:18.074874  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:18.144092  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:18.136489   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.137142   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.138620   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.139127   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:18.140582   11831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:18.144157  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:18.144176  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:18.173096  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:18.173134  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:18.208914  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:18.208943  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:19.037143  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:21.537005  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:20.774528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:20.788572  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:20.788639  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:20.858764  620795 cri.go:89] found id: ""
	I1213 12:08:20.858786  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.858794  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:20.858800  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:20.858857  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:20.887866  620795 cri.go:89] found id: ""
	I1213 12:08:20.887888  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.887897  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:20.887904  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:20.887967  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:20.918367  620795 cri.go:89] found id: ""
	I1213 12:08:20.918438  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.918462  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:20.918481  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:20.918566  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:20.943267  620795 cri.go:89] found id: ""
	I1213 12:08:20.943292  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.943301  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:20.943308  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:20.943362  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:20.972672  620795 cri.go:89] found id: ""
	I1213 12:08:20.972707  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.972716  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:20.972723  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:20.972781  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:20.997368  620795 cri.go:89] found id: ""
	I1213 12:08:20.997394  620795 logs.go:282] 0 containers: []
	W1213 12:08:20.997404  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:20.997411  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:20.997487  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:21.029283  620795 cri.go:89] found id: ""
	I1213 12:08:21.029309  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.029319  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:21.029328  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:21.029382  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:21.054485  620795 cri.go:89] found id: ""
	I1213 12:08:21.054510  620795 logs.go:282] 0 containers: []
	W1213 12:08:21.054520  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:21.054529  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:21.054540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:21.121036  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:21.121073  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:21.137498  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:21.137526  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:21.201021  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:21.192527   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.193441   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195064   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.195396   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:21.196967   11945 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:21.201047  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:21.201060  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:21.233120  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:21.233155  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:23.768528  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:23.784788  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:23.784875  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:23.861902  620795 cri.go:89] found id: ""
	I1213 12:08:23.861933  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.861949  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:23.861956  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:23.862019  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:23.890007  620795 cri.go:89] found id: ""
	I1213 12:08:23.890029  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.890038  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:23.890044  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:23.890104  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:23.915427  620795 cri.go:89] found id: ""
	I1213 12:08:23.915450  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.915459  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:23.915465  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:23.915550  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:23.941041  620795 cri.go:89] found id: ""
	I1213 12:08:23.941069  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.941078  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:23.941085  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:23.941141  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:23.966860  620795 cri.go:89] found id: ""
	I1213 12:08:23.966886  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.966895  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:23.966902  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:23.966958  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:23.992499  620795 cri.go:89] found id: ""
	I1213 12:08:23.992528  620795 logs.go:282] 0 containers: []
	W1213 12:08:23.992537  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:23.992558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:23.992616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:24.019996  620795 cri.go:89] found id: ""
	I1213 12:08:24.020030  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.020045  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:24.020052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:24.020129  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:24.047181  620795 cri.go:89] found id: ""
	I1213 12:08:24.047216  620795 logs.go:282] 0 containers: []
	W1213 12:08:24.047225  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:24.047234  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:24.047245  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:24.110372  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:24.102615   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.103224   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.104739   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.105164   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:24.106663   12051 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:24.110398  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:24.110412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:24.139714  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:24.139748  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:24.172397  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:24.172426  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:24.037139  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:26.537138  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:24.240938  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:24.240975  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
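The repeated block above is the wait loop this run sits in while the cluster's control plane is down: it first looks for a running kube-apiserver process, then asks the CRI runtime whether any of the expected control-plane containers exist (running or exited), and only then falls back to collecting node logs. A minimal shell reproduction of those presence probes, using the pgrep and crictl invocations exactly as they appear in the log (assumed to be run on the minikube node, with sudo and crictl available), is:

	#!/usr/bin/env bash
	# Presence probes copied from the logged commands above.
	set -u

	# Is a kube-apiserver process running at all?
	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  echo "kube-apiserver process: running"
	else
	  echo "kube-apiserver process: not running"
	fi

	# Does the CRI runtime know about any of the expected containers?
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  else
	    echo "$name: $ids"
	  fi
	done

In this run every probe returns an empty id list ("found id: \"\"", "0 containers"), which is why the same cycle repeats every few seconds below.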
	I1213 12:08:26.757922  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:26.771140  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:26.771256  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:26.808049  620795 cri.go:89] found id: ""
	I1213 12:08:26.808124  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.808149  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:26.808169  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:26.808258  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:26.845750  620795 cri.go:89] found id: ""
	I1213 12:08:26.845826  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.845851  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:26.845870  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:26.845951  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:26.885327  620795 cri.go:89] found id: ""
	I1213 12:08:26.885401  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.885424  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:26.885444  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:26.885533  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:26.912813  620795 cri.go:89] found id: ""
	I1213 12:08:26.912844  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.912853  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:26.912860  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:26.912917  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:26.940224  620795 cri.go:89] found id: ""
	I1213 12:08:26.940301  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.940317  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:26.940325  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:26.940383  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:26.970684  620795 cri.go:89] found id: ""
	I1213 12:08:26.970728  620795 logs.go:282] 0 containers: []
	W1213 12:08:26.970738  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:26.970745  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:26.970825  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:27.001739  620795 cri.go:89] found id: ""
	I1213 12:08:27.001821  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.001846  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:27.001867  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:27.001968  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:27.029502  620795 cri.go:89] found id: ""
	I1213 12:08:27.029525  620795 logs.go:282] 0 containers: []
	W1213 12:08:27.029533  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:27.029542  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:27.029561  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:27.097411  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:27.090200   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.090583   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092154   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.092579   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:27.093994   12162 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:27.097433  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:27.097445  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:27.126207  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:27.126242  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:27.152776  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:27.152814  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:27.218430  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:27.218466  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
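The "describe nodes" step fails for the same underlying reason: the node-local kubeconfig points kubectl at localhost:8443, and with no apiserver container running nothing is listening there, so every API discovery request is refused. The same condition can be confirmed with a single call against the binary and kubeconfig paths shown in the log (assumed to still exist on the node):

	# Quick check that the endpoint used by the describe-nodes step is down.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
	# While the apiserver is down this prints:
	#   The connection to the server localhost:8443 was refused - did you specify the right host or port?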
	W1213 12:08:29.036447  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:31.536317  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:29.735087  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:29.746276  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:29.746353  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:29.790488  620795 cri.go:89] found id: ""
	I1213 12:08:29.790563  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.790587  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:29.790607  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:29.790694  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:29.863661  620795 cri.go:89] found id: ""
	I1213 12:08:29.863730  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.863747  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:29.863754  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:29.863822  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:29.889696  620795 cri.go:89] found id: ""
	I1213 12:08:29.889723  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.889731  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:29.889738  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:29.889793  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:29.917557  620795 cri.go:89] found id: ""
	I1213 12:08:29.917619  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.917642  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:29.917657  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:29.917732  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:29.941179  620795 cri.go:89] found id: ""
	I1213 12:08:29.941201  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.941210  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:29.941217  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:29.941276  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:29.965683  620795 cri.go:89] found id: ""
	I1213 12:08:29.965758  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.965775  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:29.965783  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:29.965858  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:29.994076  620795 cri.go:89] found id: ""
	I1213 12:08:29.994111  620795 logs.go:282] 0 containers: []
	W1213 12:08:29.994121  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:29.994127  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:29.994189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:30.034696  620795 cri.go:89] found id: ""
	I1213 12:08:30.034723  620795 logs.go:282] 0 containers: []
	W1213 12:08:30.034733  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:30.034743  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:30.034756  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:30.103277  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:30.103319  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:30.120811  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:30.120901  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:30.194375  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:30.185897   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.186387   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.187817   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.188577   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:30.190599   12282 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:30.194399  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:30.194412  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:30.225794  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:30.225830  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:32.757391  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:32.768065  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:32.768178  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:32.801083  620795 cri.go:89] found id: ""
	I1213 12:08:32.801105  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.801114  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:32.801123  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:32.801179  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:32.839546  620795 cri.go:89] found id: ""
	I1213 12:08:32.839567  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.839576  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:32.839582  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:32.839637  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:32.888939  620795 cri.go:89] found id: ""
	I1213 12:08:32.889005  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.889029  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:32.889044  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:32.889115  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:32.926624  620795 cri.go:89] found id: ""
	I1213 12:08:32.926651  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.926666  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:32.926676  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:32.926752  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:32.958800  620795 cri.go:89] found id: ""
	I1213 12:08:32.958835  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.958844  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:32.958850  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:32.958916  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:32.989617  620795 cri.go:89] found id: ""
	I1213 12:08:32.989692  620795 logs.go:282] 0 containers: []
	W1213 12:08:32.989708  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:32.989721  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:32.989791  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:33.017551  620795 cri.go:89] found id: ""
	I1213 12:08:33.017623  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.017647  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:33.017659  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:33.017736  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:33.043587  620795 cri.go:89] found id: ""
	I1213 12:08:33.043612  620795 logs.go:282] 0 containers: []
	W1213 12:08:33.043621  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:33.043632  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:33.043644  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:33.114830  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:33.105828   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.106521   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108296   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.108871   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:33.110537   12390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:33.114904  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:33.114923  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:33.144060  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:33.144098  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:33.174527  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:33.174559  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:33.242589  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:33.242622  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
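Once the container probes come back empty, each iteration gathers the same node-local logs seen above (kubelet, dmesg, CRI-O, container status) alongside the failing describe-nodes call. The node-local part, bundled into one pass with the logged commands verbatim (the output directory is illustrative, not from the log), would look like:

	# One pass of the fallback log collection from the cycle above.
	set -u
	out=/tmp/minikube-diag          # hypothetical output location
	mkdir -p "$out"

	sudo journalctl -u kubelet -n 400 > "$out/kubelet.log"
	sudo journalctl -u crio -n 400 > "$out/crio.log"
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > "$out/dmesg.log"
	sudo "$(which crictl || echo crictl)" ps -a > "$out/containers.txt" \
	  || sudo docker ps -a > "$out/containers.txt"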
	W1213 12:08:33.536995  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:35.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:38.037111  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:35.760100  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:35.770376  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:35.770444  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:35.803335  620795 cri.go:89] found id: ""
	I1213 12:08:35.803356  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.803365  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:35.803371  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:35.803427  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:35.837892  620795 cri.go:89] found id: ""
	I1213 12:08:35.837916  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.837926  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:35.837933  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:35.837989  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:35.866561  620795 cri.go:89] found id: ""
	I1213 12:08:35.866588  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.866598  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:35.866605  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:35.866667  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:35.892759  620795 cri.go:89] found id: ""
	I1213 12:08:35.892795  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.892804  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:35.892810  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:35.892880  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:35.923215  620795 cri.go:89] found id: ""
	I1213 12:08:35.923238  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.923247  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:35.923252  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:35.923310  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:35.950448  620795 cri.go:89] found id: ""
	I1213 12:08:35.950475  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.950484  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:35.950491  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:35.950546  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:35.976121  620795 cri.go:89] found id: ""
	I1213 12:08:35.976149  620795 logs.go:282] 0 containers: []
	W1213 12:08:35.976158  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:35.976165  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:35.976247  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:36.007726  620795 cri.go:89] found id: ""
	I1213 12:08:36.007754  620795 logs.go:282] 0 containers: []
	W1213 12:08:36.007765  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:36.007774  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:36.007789  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:36.085423  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:36.085465  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:36.104590  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:36.104621  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:36.174734  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:36.166755   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.167389   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169214   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.169622   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:36.171073   12510 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:36.174757  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:36.174771  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:36.204232  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:36.204271  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:38.733384  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:38.744052  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:38.744118  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:38.780661  620795 cri.go:89] found id: ""
	I1213 12:08:38.780685  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.780694  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:38.780704  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:38.780764  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:38.822383  620795 cri.go:89] found id: ""
	I1213 12:08:38.822407  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.822416  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:38.822422  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:38.822477  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:38.855498  620795 cri.go:89] found id: ""
	I1213 12:08:38.855544  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.855553  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:38.855565  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:38.855619  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:38.885018  620795 cri.go:89] found id: ""
	I1213 12:08:38.885045  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.885055  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:38.885062  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:38.885119  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:38.910126  620795 cri.go:89] found id: ""
	I1213 12:08:38.910162  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.910172  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:38.910179  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:38.910246  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:38.940467  620795 cri.go:89] found id: ""
	I1213 12:08:38.940502  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.940513  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:38.940520  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:38.940597  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:38.966188  620795 cri.go:89] found id: ""
	I1213 12:08:38.966222  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.966232  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:38.966238  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:38.966303  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:38.995881  620795 cri.go:89] found id: ""
	I1213 12:08:38.995907  620795 logs.go:282] 0 containers: []
	W1213 12:08:38.995917  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:38.995927  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:38.995939  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:39.015887  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:39.015917  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:39.098130  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:39.090344   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.090891   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.092783   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.093197   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:39.094699   12614 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:39.098150  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:39.098163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:39.126236  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:39.126269  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:39.153815  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:39.153842  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:40.037886  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:42.536996  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:41.721729  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:41.732158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:41.732229  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:41.760995  620795 cri.go:89] found id: ""
	I1213 12:08:41.761017  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.761026  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:41.761033  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:41.761087  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:41.795082  620795 cri.go:89] found id: ""
	I1213 12:08:41.795105  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.795113  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:41.795119  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:41.795184  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:41.825959  620795 cri.go:89] found id: ""
	I1213 12:08:41.826033  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.826056  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:41.826076  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:41.826159  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:41.852118  620795 cri.go:89] found id: ""
	I1213 12:08:41.852183  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.852198  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:41.852205  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:41.852261  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:41.877587  620795 cri.go:89] found id: ""
	I1213 12:08:41.877626  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.877636  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:41.877642  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:41.877706  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:41.906166  620795 cri.go:89] found id: ""
	I1213 12:08:41.906192  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.906202  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:41.906216  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:41.906273  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:41.935663  620795 cri.go:89] found id: ""
	I1213 12:08:41.935688  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.935697  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:41.935704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:41.935761  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:41.960919  620795 cri.go:89] found id: ""
	I1213 12:08:41.960943  620795 logs.go:282] 0 containers: []
	W1213 12:08:41.960952  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:41.960960  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:41.960971  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:41.989438  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:41.989472  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:42.026694  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:42.026779  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:42.120242  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:42.120297  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:42.141212  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:42.141246  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:42.216949  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:42.207789   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.208642   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210144   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.210924   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:42.212786   12749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:44.537110  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:47.036204  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
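The interleaved 622913 warnings appear to come from a separate test process exercising the "no-preload-307409" profile, whose apiserver at 192.168.85.2:8443 is refusing connections as well, so its node "Ready" poll keeps retrying. A plain reachability probe against the same URL (a sketch only; -k skips certificate verification because the point is just to see whether anything is listening):

	curl -sk --max-time 5 https://192.168.85.2:8443/api/v1/nodes/no-preload-307409 \
	  || echo "192.168.85.2:8443 refused the connection or timed out"

If an apiserver were actually listening, this would print an HTTP error body from the API server rather than falling through to the echo.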
	I1213 12:08:44.717236  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:44.728891  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:44.728977  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:44.753976  620795 cri.go:89] found id: ""
	I1213 12:08:44.754000  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.754008  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:44.754018  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:44.754078  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:44.786705  620795 cri.go:89] found id: ""
	I1213 12:08:44.786732  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.786741  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:44.786748  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:44.786806  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:44.822299  620795 cri.go:89] found id: ""
	I1213 12:08:44.822328  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.822337  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:44.822345  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:44.822401  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:44.856823  620795 cri.go:89] found id: ""
	I1213 12:08:44.856856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.856867  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:44.856873  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:44.856930  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:44.882589  620795 cri.go:89] found id: ""
	I1213 12:08:44.882614  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.882623  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:44.882630  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:44.882688  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:44.908466  620795 cri.go:89] found id: ""
	I1213 12:08:44.908491  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.908500  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:44.908507  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:44.908588  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:44.937829  620795 cri.go:89] found id: ""
	I1213 12:08:44.937856  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.937865  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:44.937872  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:44.937927  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:44.963281  620795 cri.go:89] found id: ""
	I1213 12:08:44.963305  620795 logs.go:282] 0 containers: []
	W1213 12:08:44.963315  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:44.963324  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:44.963335  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:44.991410  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:44.991446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:45.037106  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:45.037139  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:45.136316  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:45.136362  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:45.159600  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:45.159635  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:45.275736  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:45.264960   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.265716   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.268688   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.269926   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:45.271240   12861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:47.775978  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:47.794424  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:47.794535  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:47.822730  620795 cri.go:89] found id: ""
	I1213 12:08:47.822773  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.822782  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:47.822794  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:47.822874  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:47.855882  620795 cri.go:89] found id: ""
	I1213 12:08:47.855909  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.855921  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:47.855928  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:47.855992  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:47.880824  620795 cri.go:89] found id: ""
	I1213 12:08:47.880849  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.880863  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:47.880870  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:47.880944  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:47.905536  620795 cri.go:89] found id: ""
	I1213 12:08:47.905558  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.905567  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:47.905573  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:47.905627  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:47.930629  620795 cri.go:89] found id: ""
	I1213 12:08:47.930651  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.930660  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:47.930666  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:47.930722  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:47.963310  620795 cri.go:89] found id: ""
	I1213 12:08:47.963340  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.963348  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:47.963355  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:47.963416  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:47.988259  620795 cri.go:89] found id: ""
	I1213 12:08:47.988284  620795 logs.go:282] 0 containers: []
	W1213 12:08:47.988293  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:47.988300  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:47.988363  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:48.016297  620795 cri.go:89] found id: ""
	I1213 12:08:48.016324  620795 logs.go:282] 0 containers: []
	W1213 12:08:48.016334  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:48.016344  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:48.016358  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:48.036992  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:48.037157  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:48.110165  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:48.102261   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.102875   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.104540   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.105094   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:48.106601   12958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:48.110186  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:48.110199  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:48.138855  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:48.138892  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:48.167128  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:48.167162  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 12:08:49.537098  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:52.036223  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:50.735817  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:50.746548  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:50.746616  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:50.775549  620795 cri.go:89] found id: ""
	I1213 12:08:50.775575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.775585  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:50.775591  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:50.775646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:50.804612  620795 cri.go:89] found id: ""
	I1213 12:08:50.804635  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.804644  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:50.804650  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:50.804705  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:50.837625  620795 cri.go:89] found id: ""
	I1213 12:08:50.837650  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.837659  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:50.837665  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:50.837720  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:50.864589  620795 cri.go:89] found id: ""
	I1213 12:08:50.864612  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.864620  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:50.864627  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:50.864687  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:50.889551  620795 cri.go:89] found id: ""
	I1213 12:08:50.889575  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.889583  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:50.889589  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:50.889646  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:50.919224  620795 cri.go:89] found id: ""
	I1213 12:08:50.919247  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.919255  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:50.919261  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:50.919317  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:50.944422  620795 cri.go:89] found id: ""
	I1213 12:08:50.944495  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.944574  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:50.944612  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:50.944696  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:50.970021  620795 cri.go:89] found id: ""
	I1213 12:08:50.970086  620795 logs.go:282] 0 containers: []
	W1213 12:08:50.970109  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:50.970132  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:50.970163  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:50.986872  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:50.986906  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:51.060506  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:51.052011   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.052816   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.054613   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.055181   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:51.056812   13070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:51.060540  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:51.060552  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:51.092480  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:51.092521  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:51.123102  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:51.123131  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:53.694152  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:53.705704  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:53.705773  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:53.731245  620795 cri.go:89] found id: ""
	I1213 12:08:53.731268  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.731276  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:53.731282  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:53.731340  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:53.757925  620795 cri.go:89] found id: ""
	I1213 12:08:53.757957  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.757966  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:53.757973  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:53.758036  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:53.808536  620795 cri.go:89] found id: ""
	I1213 12:08:53.808559  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.808568  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:53.808575  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:53.808635  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:53.840078  620795 cri.go:89] found id: ""
	I1213 12:08:53.840112  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.840122  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:53.840129  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:53.840189  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:53.865894  620795 cri.go:89] found id: ""
	I1213 12:08:53.865917  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.865927  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:53.865933  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:53.865993  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:53.891498  620795 cri.go:89] found id: ""
	I1213 12:08:53.891542  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.891551  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:53.891558  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:53.891621  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:53.917936  620795 cri.go:89] found id: ""
	I1213 12:08:53.917959  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.917968  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:53.917974  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:53.918032  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:53.943098  620795 cri.go:89] found id: ""
	I1213 12:08:53.943169  620795 logs.go:282] 0 containers: []
	W1213 12:08:53.943193  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:53.943215  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:53.943252  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:53.971597  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:53.971637  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:54.002508  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:54.002540  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:54.080813  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:54.080899  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:54.109629  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:54.109659  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:54.177694  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:54.170109   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.170817   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172367   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.172694   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:54.174239   13202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1213 12:08:54.036977  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:08:56.537074  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:08:56.677966  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:56.688667  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 12:08:56.688741  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 12:08:56.713668  620795 cri.go:89] found id: ""
	I1213 12:08:56.713690  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.713699  620795 logs.go:284] No container was found matching "kube-apiserver"
	I1213 12:08:56.713706  620795 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 12:08:56.713762  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 12:08:56.741202  620795 cri.go:89] found id: ""
	I1213 12:08:56.741227  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.741236  620795 logs.go:284] No container was found matching "etcd"
	I1213 12:08:56.741242  620795 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 12:08:56.741339  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 12:08:56.768922  620795 cri.go:89] found id: ""
	I1213 12:08:56.768942  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.768950  620795 logs.go:284] No container was found matching "coredns"
	I1213 12:08:56.768957  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 12:08:56.769013  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 12:08:56.797125  620795 cri.go:89] found id: ""
	I1213 12:08:56.797148  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.797157  620795 logs.go:284] No container was found matching "kube-scheduler"
	I1213 12:08:56.797164  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 12:08:56.797218  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 12:08:56.824672  620795 cri.go:89] found id: ""
	I1213 12:08:56.824695  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.824703  620795 logs.go:284] No container was found matching "kube-proxy"
	I1213 12:08:56.824709  620795 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 12:08:56.824763  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 12:08:56.849420  620795 cri.go:89] found id: ""
	I1213 12:08:56.849446  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.849455  620795 logs.go:284] No container was found matching "kube-controller-manager"
	I1213 12:08:56.849462  620795 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 12:08:56.849516  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 12:08:56.875118  620795 cri.go:89] found id: ""
	I1213 12:08:56.875143  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.875152  620795 logs.go:284] No container was found matching "kindnet"
	I1213 12:08:56.875158  620795 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1213 12:08:56.875213  620795 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1213 12:08:56.900386  620795 cri.go:89] found id: ""
	I1213 12:08:56.900411  620795 logs.go:282] 0 containers: []
	W1213 12:08:56.900420  620795 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1213 12:08:56.900434  620795 logs.go:123] Gathering logs for kubelet ...
	I1213 12:08:56.900446  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 12:08:56.966130  620795 logs.go:123] Gathering logs for dmesg ...
	I1213 12:08:56.966167  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 12:08:56.982745  620795 logs.go:123] Gathering logs for describe nodes ...
	I1213 12:08:56.982773  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 12:08:57.073125  620795 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1213 12:08:57.063683   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.064467   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066129   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.066624   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:08:57.068003   13298 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 12:08:57.073146  620795 logs.go:123] Gathering logs for CRI-O ...
	I1213 12:08:57.073165  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 12:08:57.104552  620795 logs.go:123] Gathering logs for container status ...
	I1213 12:08:57.104585  620795 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 12:08:59.636110  620795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:08:59.649509  620795 out.go:203] 
	W1213 12:08:59.652376  620795 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1213 12:08:59.652409  620795 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1213 12:08:59.652418  620795 out.go:285] * Related issues:
	W1213 12:08:59.652431  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1213 12:08:59.652444  620795 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1213 12:08:59.655226  620795 out.go:203] 
	W1213 12:08:59.037102  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:01.536950  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:03.536998  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:06.036283  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:08.536173  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	W1213 12:09:10.536219  622913 node_ready.go:55] error getting node "no-preload-307409" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-307409": dial tcp 192.168.85.2:8443: connect: connection refused
	I1213 12:09:11.536756  622913 node_ready.go:38] duration metric: took 6m0.001029523s for node "no-preload-307409" to be "Ready" ...
	I1213 12:09:11.540138  622913 out.go:203] 
	W1213 12:09:11.543197  622913 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1213 12:09:11.543231  622913 out.go:285] * 
	W1213 12:09:11.545584  622913 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 12:09:11.548648  622913 out.go:203] 
	
	
	==> CRI-O <==
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494217646Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494225302Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.494232317Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49423788Z" level=info msg="RDT not available in the host system"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49425041Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495095045Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495116264Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495131451Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495779293Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.49580189Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.495943824Z" level=info msg="Updated default CNI network name to "
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.496641734Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.497083731Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.497162501Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560451228Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560494723Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.56056025Z" level=info msg="Create NRI interface"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560660304Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560668633Z" level=info msg="runtime interface created"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.56068309Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560689769Z" level=info msg="runtime interface starting up..."
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560695849Z" level=info msg="starting plugins..."
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560708797Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 12:02:55 newest-cni-800979 crio[616]: time="2025-12-13T12:02:55.560776564Z" level=info msg="No systemd watchdog enabled"
	Dec 13 12:02:55 newest-cni-800979 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:09:15.623858   13981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.624293   13981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.625556   13981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.625854   13981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:09:15.627299   13981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec13 11:22] overlayfs: idmapped layers are currently not supported
	[Dec13 11:23] overlayfs: idmapped layers are currently not supported
	[Dec13 11:24] overlayfs: idmapped layers are currently not supported
	[ +15.673058] overlayfs: idmapped layers are currently not supported
	[Dec13 11:25] overlayfs: idmapped layers are currently not supported
	[ +41.580408] overlayfs: idmapped layers are currently not supported
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:09:15 up  3:51,  0 user,  load average: 1.08, 0.86, 1.24
	Linux newest-cni-800979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:09:12 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:13 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 13 12:09:13 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:13 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:13 newest-cni-800979 kubelet[13881]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:13 newest-cni-800979 kubelet[13881]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:13 newest-cni-800979 kubelet[13881]: E1213 12:09:13.600400   13881 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:13 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:13 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:14 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 501.
	Dec 13 12:09:14 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:14 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:14 newest-cni-800979 kubelet[13886]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:14 newest-cni-800979 kubelet[13886]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:14 newest-cni-800979 kubelet[13886]: E1213 12:09:14.339266   13886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:14 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:14 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:09:15 newest-cni-800979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 502.
	Dec 13 12:09:15 newest-cni-800979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:15 newest-cni-800979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:09:15 newest-cni-800979 kubelet[13894]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:15 newest-cni-800979 kubelet[13894]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:09:15 newest-cni-800979 kubelet[13894]: E1213 12:09:15.104506   13894 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:09:15 newest-cni-800979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:09:15 newest-cni-800979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-800979 -n newest-cni-800979: exit status 2 (326.367884ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-800979" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (13.08s)
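Note: the kubelet journal above points at the root cause for this failure group: the kubelet exits during config validation because the host still exposes the legacy cgroup v1 hierarchy, so kube-apiserver never starts and every dependent wait times out. As a generic host check (not part of this test harness, shown only for context), the cgroup version can be confirmed with:

	stat -fc %T /sys/fs/cgroup/    # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on the legacy cgroup v1 hierarchy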

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:09:28.638769  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1213 12:10:14.419197  356328 config.go:182] Loaded profile config "auto-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:10:44.682523  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:12:00.472721  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:12:07.748046  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:12:11.007723  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated 16 more times]
E1213 12:12:27.930786  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated 37 more times]
E1213 12:13:05.574891  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated 17 more times]
E1213 12:13:23.541637  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated 42 more times]
E1213 12:14:06.640333  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[the warning above repeated 61 more times]
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:15:14.657576  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:15:14.663966  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:15:14.675403  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:15:14.697051  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:15:14.739257  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:15:14.823261  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:15:14.985088  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:15:15.306819  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:15:15.948776  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:15:17.230939  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 2 more times]
E1213 12:15:19.792608  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 14 more times]
E1213 12:15:35.156427  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 9 more times]
E1213 12:15:44.682175  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 10 more times]
E1213 12:15:55.638364  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 40 more times]
E1213 12:16:36.600575  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:16:37.413523  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:16:37.420235  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:16:37.431612  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:16:37.453092  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:16:37.494468  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:16:37.575975  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:16:37.737408  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:16:38.059794  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:16:38.701460  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:16:39.982990  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 2 more times]
E1213 12:16:42.544584  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 4 more times]
E1213 12:16:47.667175  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 9 more times]
E1213 12:16:57.911023  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 2 more times]
E1213 12:17:00.470282  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 17 more times]
E1213 12:17:18.392300  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[identical warning repeated 8 more times]
E1213 12:17:27.931223  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:17:58.522735  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:17:59.353739  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:05.574926  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/old-k8s-version-051699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 2 (524.176957ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
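The run of identical warnings above is the test helper repeatedly listing pods labelled k8s-app=kubernetes-dashboard and retrying on error until its 9m0s deadline; with the apiserver stopped, every attempt ends in connection refused. A rough client-go sketch of that kind of wait loop (not minikube's actual helpers_test.go code; the kubeconfig path and 5s interval are assumptions):

    // pollsketch.go: wait for a labelled pod to reach Running, or time out.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; the real test uses the profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
                    metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
                if err != nil {
                    // Keep retrying on transient errors such as "connection refused".
                    fmt.Println("WARNING: pod list returned:", err)
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return true, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            fmt.Println("pod failed to start:", err) // e.g. context deadline exceeded
        }
    }

Run against this profile it would keep printing the same warning until the context deadline fires, which is the failure reported above.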
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
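The proxy snapshot is recorded because a stray HTTP_PROXY/HTTPS_PROXY on the host can also produce apiserver connection failures; all three variables are empty here, so a proxy can be ruled out. A trivial illustrative snippet that takes the same snapshot (not the helper's own code):

    // proxyenv.go: print the proxy-related environment variables the post-mortem inspects.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
            v := os.Getenv(k)
            if v == "" {
                v = "<empty>"
            }
            fmt.Printf("%s=%q\n", k, v)
        }
    }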
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307409
helpers_test.go:244: (dbg) docker inspect no-preload-307409:

-- stdout --
	[
	    {
	        "Id": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	        "Created": "2025-12-13T11:52:23.357834479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T12:03:03.340968033Z",
	            "FinishedAt": "2025-12-13T12:03:01.976500099Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hosts",
	        "LogPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a-json.log",
	        "Name": "/no-preload-307409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-307409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	                "LowerDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307409",
	                "Source": "/var/lib/docker/volumes/no-preload-307409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307409",
	                "name.minikube.sigs.k8s.io": "no-preload-307409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c126f047073986da1996efceb8a3e932bcfa233495a4aa62f7ff0993488c461e",
	            "SandboxKey": "/var/run/docker/netns/c126f0470739",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-307409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b6:08:7b:b6:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "280e424abad6162e6fbaaf316b3c6095ab0d80a59a1f82eb556a84b2dd4f139a",
	                    "EndpointID": "012a611abbc58ce4e9989db1baedc5a39d41b5ffd347c4e9d8cd59dee05ce5c5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307409",
	                        "9fe6186bf0c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
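The inspect output above shows the container itself is running and that 8443/tcp is still published on 127.0.0.1:33476, so the connection refusals come from nothing listening inside the container rather than from a lost port mapping. An illustrative way to pull out that one mapping (standard docker --format indexing, wrapped in Go only to match the other examples):

    // hostport.go: read the host port a container publishes for 8443/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The container name comes from the minikube profile under test.
        out, err := exec.Command("docker", "inspect",
            "--format", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
            "no-preload-307409").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }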
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 2 (481.847178ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
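The two status probes use Go templates over minikube's status output: {{.Host}} reports the container (Running) while {{.APIServer}} reports the control plane inside it (Stopped). Assuming the --format flag accepts a template combining both fields, a single call can show the split (illustrative sketch only):

    // status.go: query host vs apiserver state for a profile in one call.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "status",
            "-p", "no-preload-307409", "-n", "no-preload-307409",
            "--format", "host={{.Host}} apiserver={{.APIServer}}").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            // status exits non-zero (exit status 2) when a component is stopped,
            // so the output is still worth reading even on error.
            fmt.Println("status exited non-zero:", err)
        }
    }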
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-307409 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-307409 logs -n 25: (1.02886615s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-062409 sudo containerd config dump                                                                                 │ enable-default-cni-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:16 UTC │ 13 Dec 25 12:16 UTC │
	│ ssh     │ -p enable-default-cni-062409 sudo systemctl status crio --all --full --no-pager                                                          │ enable-default-cni-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:16 UTC │ 13 Dec 25 12:16 UTC │
	│ ssh     │ -p enable-default-cni-062409 sudo systemctl cat crio --no-pager                                                                          │ enable-default-cni-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:16 UTC │ 13 Dec 25 12:16 UTC │
	│ ssh     │ -p enable-default-cni-062409 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                │ enable-default-cni-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:16 UTC │ 13 Dec 25 12:16 UTC │
	│ ssh     │ -p enable-default-cni-062409 sudo crio config                                                                                            │ enable-default-cni-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:16 UTC │ 13 Dec 25 12:16 UTC │
	│ delete  │ -p enable-default-cni-062409                                                                                                             │ enable-default-cni-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:16 UTC │ 13 Dec 25 12:16 UTC │
	│ start   │ -p flannel-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:16 UTC │ 13 Dec 25 12:17 UTC │
	│ ssh     │ -p flannel-062409 pgrep -a kubelet                                                                                                       │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:17 UTC │ 13 Dec 25 12:17 UTC │
	│ ssh     │ -p flannel-062409 sudo cat /etc/nsswitch.conf                                                                                            │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo cat /etc/hosts                                                                                                    │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo cat /etc/resolv.conf                                                                                              │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo crictl pods                                                                                                       │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo crictl ps --all                                                                                                   │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo ip a s                                                                                                            │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo ip r s                                                                                                            │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo iptables-save                                                                                                     │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo iptables -t nat -L -n -v                                                                                          │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo cat /run/flannel/subnet.env                                                                                       │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo cat /etc/kube-flannel/cni-conf.json                                                                               │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │                     │
	│ ssh     │ -p flannel-062409 sudo systemctl status kubelet --all --full --no-pager                                                                  │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo systemctl cat kubelet --no-pager                                                                                  │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo journalctl -xeu kubelet --all --full --no-pager                                                                   │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo cat /etc/kubernetes/kubelet.conf                                                                                  │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │ 13 Dec 25 12:18 UTC │
	│ ssh     │ -p flannel-062409 sudo cat /var/lib/kubelet/config.yaml                                                                                  │ flannel-062409            │ jenkins │ v1.37.0 │ 13 Dec 25 12:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:16:51
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:16:51.948248  673374 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:16:51.948379  673374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:16:51.948390  673374 out.go:374] Setting ErrFile to fd 2...
	I1213 12:16:51.948396  673374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:16:51.948640  673374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:16:51.949043  673374 out.go:368] Setting JSON to false
	I1213 12:16:51.949949  673374 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14364,"bootTime":1765613848,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:16:51.950018  673374 start.go:143] virtualization:  
	I1213 12:16:51.954557  673374 out.go:179] * [flannel-062409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:16:51.959330  673374 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:16:51.959395  673374 notify.go:221] Checking for updates...
	I1213 12:16:51.966204  673374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:16:51.969409  673374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:16:51.972557  673374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:16:51.975761  673374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:16:51.978843  673374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:16:51.982308  673374 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:16:51.982414  673374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:16:52.016859  673374 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:16:52.017008  673374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:16:52.075451  673374 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:16:52.06504011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:16:52.075590  673374 docker.go:319] overlay module found
	I1213 12:16:52.079017  673374 out.go:179] * Using the docker driver based on user configuration
	I1213 12:16:52.082014  673374 start.go:309] selected driver: docker
	I1213 12:16:52.082034  673374 start.go:927] validating driver "docker" against <nil>
	I1213 12:16:52.082050  673374 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:16:52.082780  673374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:16:52.145721  673374 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:16:52.135956589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:16:52.145885  673374 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 12:16:52.146141  673374 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:16:52.149093  673374 out.go:179] * Using Docker driver with root privileges
	I1213 12:16:52.151940  673374 cni.go:84] Creating CNI manager for "flannel"
	I1213 12:16:52.151966  673374 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1213 12:16:52.152054  673374 start.go:353] cluster config:
	{Name:flannel-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:16:52.155319  673374 out.go:179] * Starting "flannel-062409" primary control-plane node in "flannel-062409" cluster
	I1213 12:16:52.158180  673374 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:16:52.161057  673374 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:16:52.163992  673374 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 12:16:52.164066  673374 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 12:16:52.164069  673374 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:16:52.164078  673374 cache.go:65] Caching tarball of preloaded images
	I1213 12:16:52.164182  673374 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 12:16:52.164193  673374 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 12:16:52.164309  673374 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/config.json ...
	I1213 12:16:52.164334  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/config.json: {Name:mke0aaac7f82d3ba4598738793e0f010ae883570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:16:52.183453  673374 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:16:52.183474  673374 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:16:52.183494  673374 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:16:52.183552  673374 start.go:360] acquireMachinesLock for flannel-062409: {Name:mk94ac96fedb977ea3113d69f12ad284b962587e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:16:52.183655  673374 start.go:364] duration metric: took 80.338µs to acquireMachinesLock for "flannel-062409"
	I1213 12:16:52.183685  673374 start.go:93] Provisioning new machine with config: &{Name:flannel-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:16:52.183755  673374 start.go:125] createHost starting for "" (driver="docker")
	I1213 12:16:52.187221  673374 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 12:16:52.187546  673374 start.go:159] libmachine.API.Create for "flannel-062409" (driver="docker")
	I1213 12:16:52.187590  673374 client.go:173] LocalClient.Create starting
	I1213 12:16:52.187662  673374 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 12:16:52.187699  673374 main.go:143] libmachine: Decoding PEM data...
	I1213 12:16:52.187723  673374 main.go:143] libmachine: Parsing certificate...
	I1213 12:16:52.187783  673374 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 12:16:52.187808  673374 main.go:143] libmachine: Decoding PEM data...
	I1213 12:16:52.187828  673374 main.go:143] libmachine: Parsing certificate...
	I1213 12:16:52.188234  673374 cli_runner.go:164] Run: docker network inspect flannel-062409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 12:16:52.203666  673374 cli_runner.go:211] docker network inspect flannel-062409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 12:16:52.203773  673374 network_create.go:284] running [docker network inspect flannel-062409] to gather additional debugging logs...
	I1213 12:16:52.203804  673374 cli_runner.go:164] Run: docker network inspect flannel-062409
	W1213 12:16:52.222197  673374 cli_runner.go:211] docker network inspect flannel-062409 returned with exit code 1
	I1213 12:16:52.222235  673374 network_create.go:287] error running [docker network inspect flannel-062409]: docker network inspect flannel-062409: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-062409 not found
	I1213 12:16:52.222267  673374 network_create.go:289] output of [docker network inspect flannel-062409]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-062409 not found
	
	** /stderr **
	I1213 12:16:52.222396  673374 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:16:52.240038  673374 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 12:16:52.240468  673374 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 12:16:52.240725  673374 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 12:16:52.241178  673374 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c51e0}
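
The three skips followed by "using free private subnet 192.168.76.0/24" above come from a linear scan over candidate /24 ranges, rejecting any that already back an existing bridge. A minimal Go sketch of that selection, assuming the caller has already collected the subnets of existing bridge networks; firstFreeSubnet and the step size of 9 (49, 58, 67, 76 in the log) are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that does not overlap any
// subnet already used by an existing bridge network. Candidates start at
// 192.168.49.0/24 and advance the third octet by `step`, mirroring the
// 49 -> 58 -> 67 -> 76 progression seen in the log above.
func firstFreeSubnet(taken []*net.IPNet, step, attempts int) (*net.IPNet, error) {
	for i := 0; i < attempts; i++ {
		octet := 49 + i*step
		if octet > 255 {
			break
		}
		_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		if err != nil {
			return nil, err
		}
		free := true
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free private subnet found")
}

func main() {
	// The three subnets reported as taken in the log above.
	var taken []*net.IPNet
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		_, n, _ := net.ParseCIDR(cidr)
		taken = append(taken, n)
	}
	free, err := firstFreeSubnet(taken, 9, 20)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", free) // 192.168.76.0/24, as in the log
}
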
	I1213 12:16:52.241203  673374 network_create.go:124] attempt to create docker network flannel-062409 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 12:16:52.241259  673374 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-062409 flannel-062409
	I1213 12:16:52.309414  673374 network_create.go:108] docker network flannel-062409 192.168.76.0/24 created
	I1213 12:16:52.309458  673374 kic.go:121] calculated static IP "192.168.76.2" for the "flannel-062409" container
	I1213 12:16:52.309558  673374 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 12:16:52.329342  673374 cli_runner.go:164] Run: docker volume create flannel-062409 --label name.minikube.sigs.k8s.io=flannel-062409 --label created_by.minikube.sigs.k8s.io=true
	I1213 12:16:52.348119  673374 oci.go:103] Successfully created a docker volume flannel-062409
	I1213 12:16:52.348197  673374 cli_runner.go:164] Run: docker run --rm --name flannel-062409-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-062409 --entrypoint /usr/bin/test -v flannel-062409:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 12:16:52.865837  673374 oci.go:107] Successfully prepared a docker volume flannel-062409
	I1213 12:16:52.865913  673374 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 12:16:52.865924  673374 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 12:16:52.865999  673374 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v flannel-062409:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 12:16:56.954126  673374 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v flannel-062409:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.088081319s)
	I1213 12:16:56.954157  673374 kic.go:203] duration metric: took 4.088229523s to extract preloaded images to volume ...
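
The preload step above populates the flannel-062409 volume by running tar inside a throwaway kicbase container, so the CRI-O image store is filled before the node container ever starts. A small Go sketch of the same docker invocation via os/exec; extractPreload and the example arguments are placeholders, not the real cli_runner helper:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mounts the preloaded lz4 tarball read-only into a throwaway
// container and untars it into the named volume, mirroring the docker run
// command shown in the log above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Example values modelled on the log; adjust to the local cache layout.
	err := extractPreload(
		"/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4",
		"flannel-062409",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083")
	fmt.Println(err)
}
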
	W1213 12:16:56.954412  673374 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 12:16:56.954559  673374 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 12:16:57.002284  673374 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-062409 --name flannel-062409 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-062409 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-062409 --network flannel-062409 --ip 192.168.76.2 --volume flannel-062409:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 12:16:57.317958  673374 cli_runner.go:164] Run: docker container inspect flannel-062409 --format={{.State.Running}}
	I1213 12:16:57.342807  673374 cli_runner.go:164] Run: docker container inspect flannel-062409 --format={{.State.Status}}
	I1213 12:16:57.368932  673374 cli_runner.go:164] Run: docker exec flannel-062409 stat /var/lib/dpkg/alternatives/iptables
	I1213 12:16:57.427887  673374 oci.go:144] the created container "flannel-062409" has a running status.
	I1213 12:16:57.427916  673374 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa...
	I1213 12:16:57.929596  673374 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 12:16:57.949545  673374 cli_runner.go:164] Run: docker container inspect flannel-062409 --format={{.State.Status}}
	I1213 12:16:57.966950  673374 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 12:16:57.966977  673374 kic_runner.go:114] Args: [docker exec --privileged flannel-062409 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 12:16:58.008991  673374 cli_runner.go:164] Run: docker container inspect flannel-062409 --format={{.State.Status}}
	I1213 12:16:58.032553  673374 machine.go:94] provisionDockerMachine start ...
	I1213 12:16:58.032665  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:16:58.051605  673374 main.go:143] libmachine: Using SSH client type: native
	I1213 12:16:58.052016  673374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 12:16:58.052034  673374 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:16:58.052764  673374 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53320->127.0.0.1:33504: read: connection reset by peer
	I1213 12:17:01.212253  673374 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-062409
	
	I1213 12:17:01.212280  673374 ubuntu.go:182] provisioning hostname "flannel-062409"
	I1213 12:17:01.212358  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:01.234615  673374 main.go:143] libmachine: Using SSH client type: native
	I1213 12:17:01.234945  673374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 12:17:01.234961  673374 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-062409 && echo "flannel-062409" | sudo tee /etc/hostname
	I1213 12:17:01.439586  673374 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-062409
	
	I1213 12:17:01.439682  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:01.460304  673374 main.go:143] libmachine: Using SSH client type: native
	I1213 12:17:01.460710  673374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 12:17:01.460748  673374 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-062409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-062409/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-062409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:17:01.616691  673374 main.go:143] libmachine: SSH cmd err, output: <nil>: 
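
Each provisioning snippet above runs over SSH on whatever ephemeral host port Docker bound to the container's 22/tcp (33504 here), looked up with the inspect template that recurs throughout the log. A sketch of that flow using golang.org/x/crypto/ssh, assuming the id_rsa key location and the docker user from the log; hostSSHPort is an illustrative helper, not minikube's sshutil:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"

	"golang.org/x/crypto/ssh"
)

// hostSSHPort asks Docker which ephemeral host port was bound to the
// container's 22/tcp, using the same inspect template as the log.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("flannel-062409")
	if err != nil {
		panic(err)
	}
	// Key path modelled on the log's machines directory; adjust locally.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/flannel-062409/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-only test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Println(string(out), err) // expect "flannel-062409"
}
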
	I1213 12:17:01.616724  673374 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:17:01.616745  673374 ubuntu.go:190] setting up certificates
	I1213 12:17:01.616773  673374 provision.go:84] configureAuth start
	I1213 12:17:01.616849  673374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-062409
	I1213 12:17:01.636769  673374 provision.go:143] copyHostCerts
	I1213 12:17:01.636851  673374 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:17:01.636865  673374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:17:01.636959  673374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:17:01.637075  673374 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:17:01.637088  673374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:17:01.637119  673374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:17:01.637182  673374 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:17:01.637192  673374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:17:01.637218  673374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:17:01.637274  673374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.flannel-062409 san=[127.0.0.1 192.168.76.2 flannel-062409 localhost minikube]
	I1213 12:17:01.741896  673374 provision.go:177] copyRemoteCerts
	I1213 12:17:01.741980  673374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:17:01.742026  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:01.761253  673374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa Username:docker}
	I1213 12:17:01.868741  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:17:01.895777  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1213 12:17:01.916422  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 12:17:01.937166  673374 provision.go:87] duration metric: took 320.373129ms to configureAuth
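
configureAuth above issues a server certificate signed by the cached minikubeCA whose SANs cover 127.0.0.1, 192.168.76.2, flannel-062409, localhost and minikube. A self-contained crypto/x509 sketch of issuing such a SAN-bearing certificate; the in-memory CA stands in for the real ca.pem/ca-key.pem pair, and error handling is elided to keep the sketch short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log: node IP, loopback and the
	// profile/localhost/minikube DNS names.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-062409"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"flannel-062409", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
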
	I1213 12:17:01.937243  673374 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:17:01.937488  673374 config.go:182] Loaded profile config "flannel-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 12:17:01.937614  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:01.957711  673374 main.go:143] libmachine: Using SSH client type: native
	I1213 12:17:01.958059  673374 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1213 12:17:01.958083  673374 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:17:02.313386  673374 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:17:02.313410  673374 machine.go:97] duration metric: took 4.280833965s to provisionDockerMachine
	I1213 12:17:02.313421  673374 client.go:176] duration metric: took 10.12582022s to LocalClient.Create
	I1213 12:17:02.313438  673374 start.go:167] duration metric: took 10.125894766s to libmachine.API.Create "flannel-062409"
	I1213 12:17:02.313446  673374 start.go:293] postStartSetup for "flannel-062409" (driver="docker")
	I1213 12:17:02.313459  673374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:17:02.313538  673374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:17:02.313587  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:02.333352  673374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa Username:docker}
	I1213 12:17:02.444663  673374 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:17:02.448782  673374 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:17:02.448815  673374 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:17:02.448829  673374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:17:02.448898  673374 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:17:02.449011  673374 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:17:02.449122  673374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:17:02.458023  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:17:02.479085  673374 start.go:296] duration metric: took 165.623794ms for postStartSetup
	I1213 12:17:02.479592  673374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-062409
	I1213 12:17:02.498077  673374 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/config.json ...
	I1213 12:17:02.498386  673374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:17:02.498442  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:02.517814  673374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa Username:docker}
	I1213 12:17:02.625338  673374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:17:02.631360  673374 start.go:128] duration metric: took 10.447588605s to createHost
	I1213 12:17:02.631390  673374 start.go:83] releasing machines lock for "flannel-062409", held for 10.447720054s
	I1213 12:17:02.631496  673374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-062409
	I1213 12:17:02.652887  673374 ssh_runner.go:195] Run: cat /version.json
	I1213 12:17:02.652935  673374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:17:02.652954  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:02.653017  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:02.682467  673374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa Username:docker}
	I1213 12:17:02.683163  673374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa Username:docker}
	I1213 12:17:02.788854  673374 ssh_runner.go:195] Run: systemctl --version
	I1213 12:17:02.888771  673374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:17:02.932117  673374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:17:02.937298  673374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:17:02.937459  673374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:17:02.970443  673374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 12:17:02.970469  673374 start.go:496] detecting cgroup driver to use...
	I1213 12:17:02.970504  673374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:17:02.970568  673374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:17:02.990904  673374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:17:03.008575  673374 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:17:03.008732  673374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:17:03.030598  673374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:17:03.055361  673374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:17:03.192614  673374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:17:03.324477  673374 docker.go:234] disabling docker service ...
	I1213 12:17:03.324615  673374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:17:03.349519  673374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:17:03.365271  673374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:17:03.493967  673374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:17:03.653410  673374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:17:03.668723  673374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:17:03.685817  673374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:17:03.685892  673374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:17:03.696898  673374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:17:03.697005  673374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:17:03.707665  673374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:17:03.718108  673374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:17:03.728812  673374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:17:03.738505  673374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:17:03.749233  673374 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:17:03.765357  673374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
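
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to switch the pause image and the cgroup manager before CRI-O is restarted. The two core substitutions expressed as Go regexps over an in-memory copy of the file, purely for illustration (the sample config contents below are assumed):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A minimal stand-in for 02-crio.conf before the rewrite.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
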
	I1213 12:17:03.776122  673374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:17:03.785101  673374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:17:03.794087  673374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:17:03.924223  673374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 12:17:04.114313  673374 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:17:04.114403  673374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:17:04.120812  673374 start.go:564] Will wait 60s for crictl version
	I1213 12:17:04.120900  673374 ssh_runner.go:195] Run: which crictl
	I1213 12:17:04.125732  673374 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:17:04.156980  673374 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:17:04.157107  673374 ssh_runner.go:195] Run: crio --version
	I1213 12:17:04.189066  673374 ssh_runner.go:195] Run: crio --version
	I1213 12:17:04.223304  673374 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 12:17:04.226305  673374 cli_runner.go:164] Run: docker network inspect flannel-062409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:17:04.245193  673374 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 12:17:04.250429  673374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:17:04.262233  673374 kubeadm.go:884] updating cluster {Name:flannel-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:17:04.262378  673374 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 12:17:04.262442  673374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:17:04.302996  673374 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:17:04.303021  673374 crio.go:433] Images already preloaded, skipping extraction
	I1213 12:17:04.303110  673374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:17:04.343564  673374 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:17:04.343592  673374 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:17:04.343600  673374 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1213 12:17:04.343709  673374 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=flannel-062409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:flannel-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1213 12:17:04.343803  673374 ssh_runner.go:195] Run: crio config
	I1213 12:17:04.426772  673374 cni.go:84] Creating CNI manager for "flannel"
	I1213 12:17:04.426811  673374 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:17:04.426840  673374 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-062409 NodeName:flannel-062409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:17:04.426995  673374 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-062409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 12:17:04.427079  673374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 12:17:04.437396  673374 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:17:04.437487  673374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:17:04.446594  673374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1213 12:17:04.462843  673374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 12:17:04.479361  673374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
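
The 2211-byte kubeadm.yaml.new copied above is rendered from the kubeadm options struct logged earlier. A trimmed text/template sketch that fills the node-specific fields into the InitConfiguration stanza; it covers only a fragment of the full config shown above and the template text itself is illustrative:

package main

import (
	"os"
	"text/template"
)

// A cut-down version of the InitConfiguration above, rendered from the
// node-specific values; field names mirror the log, not minikube's templates.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
`

func main() {
	data := struct {
		NodeIP, NodeName, CRISocket string
		APIServerPort               int
	}{"192.168.76.2", "flannel-062409", "/var/run/crio/crio.sock", 8443}

	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
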
	I1213 12:17:04.495261  673374 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:17:04.499691  673374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:17:04.511336  673374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:17:04.643428  673374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:17:04.662882  673374 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409 for IP: 192.168.76.2
	I1213 12:17:04.662965  673374 certs.go:195] generating shared ca certs ...
	I1213 12:17:04.662997  673374 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:04.663263  673374 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:17:04.663344  673374 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:17:04.663379  673374 certs.go:257] generating profile certs ...
	I1213 12:17:04.663482  673374 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/client.key
	I1213 12:17:04.663538  673374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/client.crt with IP's: []
	I1213 12:17:04.985292  673374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/client.crt ...
	I1213 12:17:04.985341  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/client.crt: {Name:mk0a7991c3f7b481166a5448361415cd09b0d035 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:04.985575  673374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/client.key ...
	I1213 12:17:04.985597  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/client.key: {Name:mked3a690461b1e54d72fc3a800161128fd712a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:04.985707  673374 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.key.d734d692
	I1213 12:17:04.985726  673374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.crt.d734d692 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 12:17:05.522941  673374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.crt.d734d692 ...
	I1213 12:17:05.522978  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.crt.d734d692: {Name:mke5edf04f992ef037e1ab221946d72b75780359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:05.523214  673374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.key.d734d692 ...
	I1213 12:17:05.523231  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.key.d734d692: {Name:mkf9ce484ce083f1cc2158d9df35fa99be467b9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:05.523333  673374 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.crt.d734d692 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.crt
	I1213 12:17:05.523422  673374 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.key.d734d692 -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.key
	I1213 12:17:05.523482  673374 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.key
	I1213 12:17:05.523495  673374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.crt with IP's: []
	I1213 12:17:05.871603  673374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.crt ...
	I1213 12:17:05.871637  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.crt: {Name:mk30802617a684f0097912b905507e9cb4bb8d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:05.871832  673374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.key ...
	I1213 12:17:05.871848  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.key: {Name:mk8e1b82d1f0d1b1ddbb786a3cbeed17068de336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
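
The apiserver certificate generated above carries the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2; the first of those is simply the .1 address of the ServiceCIDR 10.96.0.0/12. A small Go sketch of that derivation (firstServiceIP is an illustrative name):

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the .1 address of a service CIDR, which is how the
// 10.96.0.1 SAN on the apiserver certificate is obtained from 10.96.0.0/12.
func firstServiceIP(cidr string) (net.IP, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := n.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 CIDR expected: %s", cidr)
	}
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[3]++ // network address + 1
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}
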
	I1213 12:17:05.872063  673374 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:17:05.872112  673374 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:17:05.872128  673374 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:17:05.872157  673374 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:17:05.872188  673374 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:17:05.872216  673374 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:17:05.872277  673374 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:17:05.872892  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:17:05.893765  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:17:05.914209  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:17:05.936316  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:17:05.956841  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 12:17:05.977898  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 12:17:06.002035  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:17:06.030926  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/flannel-062409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 12:17:06.057286  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:17:06.079492  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:17:06.102090  673374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:17:06.125321  673374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:17:06.140742  673374 ssh_runner.go:195] Run: openssl version
	I1213 12:17:06.147816  673374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:17:06.156592  673374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:17:06.165319  673374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:17:06.169955  673374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:17:06.170029  673374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:17:06.213237  673374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:17:06.221665  673374 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 12:17:06.230122  673374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:17:06.239104  673374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:17:06.248334  673374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:17:06.253547  673374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:17:06.253643  673374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:17:06.298002  673374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:17:06.307538  673374 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 12:17:06.317310  673374 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:17:06.327149  673374 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:17:06.336957  673374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:17:06.341471  673374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:17:06.341542  673374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:17:06.389103  673374 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 12:17:06.397645  673374 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
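
The openssl/ln sequence above publishes each CA bundle under /etc/ssl/certs by its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS clients inside the node can find them. A sketch of the same idea from Go; linkBySubjectHash is a hypothetical helper and the paths are examples that require root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash asks openssl for the subject hash of a CA bundle and
// symlinks it into certsDir as <hash>.0, matching the log's ln -fs step.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // replace an existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}
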
	I1213 12:17:06.406119  673374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:17:06.410377  673374 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 12:17:06.410437  673374 kubeadm.go:401] StartCluster: {Name:flannel-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:17:06.410515  673374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:17:06.410581  673374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:17:06.442343  673374 cri.go:89] found id: ""
	I1213 12:17:06.442485  673374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:17:06.451354  673374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 12:17:06.460328  673374 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 12:17:06.460488  673374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 12:17:06.469320  673374 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 12:17:06.469351  673374 kubeadm.go:158] found existing configuration files:
	
	I1213 12:17:06.469423  673374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 12:17:06.478517  673374 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 12:17:06.478630  673374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 12:17:06.487579  673374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 12:17:06.496916  673374 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 12:17:06.497042  673374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 12:17:06.505971  673374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 12:17:06.515465  673374 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 12:17:06.515631  673374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 12:17:06.532443  673374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 12:17:06.546772  673374 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 12:17:06.546946  673374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
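
The four grep-then-rm steps above drop any pre-existing kubeconfig that does not reference https://control-plane.minikube.internal:8443 so that kubeadm regenerates them. A local-filesystem sketch of that cleanup loop; the real flow runs the same checks over SSH with sudo, and cleanStaleKubeconfigs is an illustrative name:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any existing kubeconfig that does not point
// at the expected control-plane endpoint, mirroring the loop in the log.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the right endpoint, keep it
		}
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			fmt.Println("remove", p, ":", err)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
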
	I1213 12:17:06.556871  673374 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 12:17:06.611362  673374 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 12:17:06.611490  673374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 12:17:06.641442  673374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 12:17:06.641546  673374 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 12:17:06.641617  673374 kubeadm.go:319] OS: Linux
	I1213 12:17:06.641678  673374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 12:17:06.641793  673374 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 12:17:06.641886  673374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 12:17:06.641995  673374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 12:17:06.642090  673374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 12:17:06.642181  673374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 12:17:06.642258  673374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 12:17:06.642343  673374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 12:17:06.642430  673374 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 12:17:06.722725  673374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 12:17:06.722839  673374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 12:17:06.722941  673374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 12:17:06.731784  673374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 12:17:06.738312  673374 out.go:252]   - Generating certificates and keys ...
	I1213 12:17:06.738486  673374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 12:17:06.738589  673374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 12:17:06.856625  673374 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 12:17:07.129170  673374 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 12:17:07.270483  673374 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 12:17:07.876093  673374 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 12:17:08.006864  673374 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 12:17:08.007238  673374 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [flannel-062409 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:17:08.656140  673374 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 12:17:08.656525  673374 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [flannel-062409 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:17:09.697971  673374 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 12:17:10.874931  673374 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 12:17:10.979177  673374 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 12:17:10.979483  673374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 12:17:11.174369  673374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 12:17:11.393120  673374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 12:17:12.345136  673374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 12:17:13.013709  673374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 12:17:13.534179  673374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 12:17:13.535046  673374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 12:17:13.537730  673374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 12:17:13.541361  673374 out.go:252]   - Booting up control plane ...
	I1213 12:17:13.541481  673374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 12:17:13.541571  673374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 12:17:13.541660  673374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 12:17:13.559474  673374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 12:17:13.559657  673374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 12:17:13.568406  673374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 12:17:13.568737  673374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 12:17:13.568983  673374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 12:17:13.695789  673374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 12:17:13.695916  673374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:17:14.696646  673374 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000912835s
	I1213 12:17:14.700179  673374 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 12:17:14.700283  673374 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 12:17:14.700575  673374 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 12:17:14.700715  673374 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 12:17:17.447476  673374 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.74692901s
	I1213 12:17:18.471813  673374 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.771573114s
	I1213 12:17:20.201719  673374 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501439492s
	I1213 12:17:20.234485  673374 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 12:17:20.252039  673374 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 12:17:20.270415  673374 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 12:17:20.270899  673374 kubeadm.go:319] [mark-control-plane] Marking the node flannel-062409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 12:17:20.283492  673374 kubeadm.go:319] [bootstrap-token] Using token: krdai5.plwoee8423t8hxqb
	I1213 12:17:20.286403  673374 out.go:252]   - Configuring RBAC rules ...
	I1213 12:17:20.286601  673374 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 12:17:20.291101  673374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 12:17:20.298700  673374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 12:17:20.302451  673374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 12:17:20.305841  673374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 12:17:20.311735  673374 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 12:17:20.610284  673374 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 12:17:21.067288  673374 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 12:17:21.608913  673374 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 12:17:21.610175  673374 kubeadm.go:319] 
	I1213 12:17:21.610253  673374 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 12:17:21.610263  673374 kubeadm.go:319] 
	I1213 12:17:21.610340  673374 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 12:17:21.610350  673374 kubeadm.go:319] 
	I1213 12:17:21.610376  673374 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 12:17:21.610439  673374 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 12:17:21.610494  673374 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 12:17:21.610501  673374 kubeadm.go:319] 
	I1213 12:17:21.610574  673374 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 12:17:21.610583  673374 kubeadm.go:319] 
	I1213 12:17:21.610654  673374 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 12:17:21.610666  673374 kubeadm.go:319] 
	I1213 12:17:21.610718  673374 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 12:17:21.610802  673374 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 12:17:21.610875  673374 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 12:17:21.610883  673374 kubeadm.go:319] 
	I1213 12:17:21.610994  673374 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 12:17:21.611085  673374 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 12:17:21.611090  673374 kubeadm.go:319] 
	I1213 12:17:21.611174  673374 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token krdai5.plwoee8423t8hxqb \
	I1213 12:17:21.611278  673374 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 \
	I1213 12:17:21.611298  673374 kubeadm.go:319] 	--control-plane 
	I1213 12:17:21.611302  673374 kubeadm.go:319] 
	I1213 12:17:21.611387  673374 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 12:17:21.611394  673374 kubeadm.go:319] 
	I1213 12:17:21.611477  673374 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token krdai5.plwoee8423t8hxqb \
	I1213 12:17:21.611600  673374 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 
	I1213 12:17:21.615651  673374 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 12:17:21.615876  673374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:17:21.615979  673374 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:17:21.615995  673374 cni.go:84] Creating CNI manager for "flannel"
	I1213 12:17:21.619104  673374 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	I1213 12:17:21.621896  673374 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 12:17:21.626340  673374 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 12:17:21.626357  673374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1213 12:17:21.641207  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 12:17:22.149679  673374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 12:17:22.149802  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:22.149870  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-062409 minikube.k8s.io/updated_at=2025_12_13T12_17_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=flannel-062409 minikube.k8s.io/primary=true
	I1213 12:17:22.377592  673374 ops.go:34] apiserver oom_adj: -16
	I1213 12:17:22.377718  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:22.878298  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:23.378143  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:23.878431  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:24.377924  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:24.878417  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:25.378459  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:25.877953  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:26.377865  673374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:17:26.491448  673374 kubeadm.go:1114] duration metric: took 4.341679631s to wait for elevateKubeSystemPrivileges
	I1213 12:17:26.491487  673374 kubeadm.go:403] duration metric: took 20.081053891s to StartCluster
	I1213 12:17:26.491528  673374 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:26.491616  673374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:17:26.492965  673374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:17:26.493245  673374 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:17:26.493339  673374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 12:17:26.493614  673374 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:17:26.493700  673374 addons.go:70] Setting storage-provisioner=true in profile "flannel-062409"
	I1213 12:17:26.493715  673374 addons.go:239] Setting addon storage-provisioner=true in "flannel-062409"
	I1213 12:17:26.493743  673374 host.go:66] Checking if "flannel-062409" exists ...
	I1213 12:17:26.493666  673374 config.go:182] Loaded profile config "flannel-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 12:17:26.493956  673374 addons.go:70] Setting default-storageclass=true in profile "flannel-062409"
	I1213 12:17:26.493970  673374 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "flannel-062409"
	I1213 12:17:26.494227  673374 cli_runner.go:164] Run: docker container inspect flannel-062409 --format={{.State.Status}}
	I1213 12:17:26.494381  673374 cli_runner.go:164] Run: docker container inspect flannel-062409 --format={{.State.Status}}
	I1213 12:17:26.496544  673374 out.go:179] * Verifying Kubernetes components...
	I1213 12:17:26.505680  673374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:17:26.523952  673374 addons.go:239] Setting addon default-storageclass=true in "flannel-062409"
	I1213 12:17:26.523991  673374 host.go:66] Checking if "flannel-062409" exists ...
	I1213 12:17:26.524416  673374 cli_runner.go:164] Run: docker container inspect flannel-062409 --format={{.State.Status}}
	I1213 12:17:26.554732  673374 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:17:26.561594  673374 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:17:26.561628  673374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:17:26.561710  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:26.581731  673374 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:17:26.581752  673374 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:17:26.581812  673374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-062409
	I1213 12:17:26.601662  673374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa Username:docker}
	I1213 12:17:26.627606  673374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/flannel-062409/id_rsa Username:docker}
	I1213 12:17:26.891323  673374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:17:27.005550  673374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 12:17:27.005756  673374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:17:27.056454  673374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:17:27.998953  673374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.107549257s)
	I1213 12:17:27.999192  673374 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1213 12:17:28.001462  673374 node_ready.go:35] waiting up to 15m0s for node "flannel-062409" to be "Ready" ...
	I1213 12:17:28.061730  673374 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 12:17:28.064692  673374 addons.go:530] duration metric: took 1.571069913s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 12:17:28.506061  673374 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-062409" context rescaled to 1 replicas
	W1213 12:17:30.007211  673374 node_ready.go:57] node "flannel-062409" has "Ready":"False" status (will retry)
	I1213 12:17:32.510205  673374 node_ready.go:49] node "flannel-062409" is "Ready"
	I1213 12:17:32.510239  673374 node_ready.go:38] duration metric: took 4.507294802s for node "flannel-062409" to be "Ready" ...
	I1213 12:17:32.510253  673374 api_server.go:52] waiting for apiserver process to appear ...
	I1213 12:17:32.510321  673374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:17:32.525243  673374 api_server.go:72] duration metric: took 6.031963951s to wait for apiserver process to appear ...
	I1213 12:17:32.525269  673374 api_server.go:88] waiting for apiserver healthz status ...
	I1213 12:17:32.525289  673374 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 12:17:32.533699  673374 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 12:17:32.534765  673374 api_server.go:141] control plane version: v1.34.2
	I1213 12:17:32.534794  673374 api_server.go:131] duration metric: took 9.517282ms to wait for apiserver health ...
	I1213 12:17:32.534804  673374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 12:17:32.546354  673374 system_pods.go:59] 7 kube-system pods found
	I1213 12:17:32.546397  673374 system_pods.go:61] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:32.546407  673374 system_pods.go:61] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 12:17:32.546417  673374 system_pods.go:61] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:32.546422  673374 system_pods.go:61] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:32.546427  673374 system_pods.go:61] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:32.546434  673374 system_pods.go:61] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:17:32.546439  673374 system_pods.go:61] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:17:32.546452  673374 system_pods.go:74] duration metric: took 11.641951ms to wait for pod list to return data ...
	I1213 12:17:32.546465  673374 default_sa.go:34] waiting for default service account to be created ...
	I1213 12:17:32.549255  673374 default_sa.go:45] found service account: "default"
	I1213 12:17:32.549281  673374 default_sa.go:55] duration metric: took 2.810262ms for default service account to be created ...
	I1213 12:17:32.549297  673374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 12:17:32.554732  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:32.554769  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:32.554778  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 12:17:32.554785  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:32.554790  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:32.554794  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:32.554808  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:17:32.554817  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:17:32.554848  673374 retry.go:31] will retry after 211.185749ms: missing components: kube-dns
	I1213 12:17:32.775613  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:32.775655  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:32.775664  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 12:17:32.775671  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:32.775677  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:32.775681  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:32.775687  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:17:32.775702  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:17:32.775722  673374 retry.go:31] will retry after 343.335326ms: missing components: kube-dns
	I1213 12:17:33.122576  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:33.122616  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:33.122625  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 12:17:33.122632  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:33.122642  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:33.122647  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:33.122654  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:17:33.122667  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:17:33.122687  673374 retry.go:31] will retry after 335.89306ms: missing components: kube-dns
	I1213 12:17:33.462795  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:33.462834  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:33.462844  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 12:17:33.462850  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:33.462893  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:33.462901  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:33.462914  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:33.462919  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:33.462933  673374 retry.go:31] will retry after 450.239ms: missing components: kube-dns
	I1213 12:17:33.916621  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:33.916660  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:33.916668  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:33.916708  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:33.916714  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:33.916718  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:33.916730  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:33.916735  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:33.916759  673374 retry.go:31] will retry after 678.026924ms: missing components: kube-dns
	I1213 12:17:34.599208  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:34.599246  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:34.599253  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:34.599260  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:34.599265  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:34.599270  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:34.599274  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:34.599277  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:34.599292  673374 retry.go:31] will retry after 641.894306ms: missing components: kube-dns
	I1213 12:17:35.244970  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:35.245006  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:35.245014  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:35.245020  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:35.245025  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:35.245030  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:35.245034  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:35.245038  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:35.245051  673374 retry.go:31] will retry after 964.356991ms: missing components: kube-dns
	I1213 12:17:36.213267  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:36.213306  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:36.213314  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:36.213320  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:36.213324  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:36.213328  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:36.213333  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:36.213337  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:36.213351  673374 retry.go:31] will retry after 1.01847872s: missing components: kube-dns
	I1213 12:17:37.236194  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:37.236229  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:37.236237  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:37.236243  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:37.236249  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:37.236253  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:37.236257  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:37.236261  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:37.236275  673374 retry.go:31] will retry after 1.452522265s: missing components: kube-dns
	I1213 12:17:38.692384  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:38.692426  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:38.692434  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:38.692441  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:38.692446  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:38.692451  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:38.692455  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:38.692459  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:38.692474  673374 retry.go:31] will retry after 1.960458289s: missing components: kube-dns
	I1213 12:17:40.657997  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:40.658036  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:40.658044  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:40.658050  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:40.658056  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:40.658063  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:40.658067  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:40.658071  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:40.658092  673374 retry.go:31] will retry after 2.740402054s: missing components: kube-dns
	I1213 12:17:43.402283  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:43.402318  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:17:43.402327  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:43.402333  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:43.402337  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:43.402342  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:43.402346  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:43.402350  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:43.402364  673374 retry.go:31] will retry after 3.518648046s: missing components: kube-dns
	I1213 12:17:46.924586  673374 system_pods.go:86] 7 kube-system pods found
	I1213 12:17:46.924623  673374 system_pods.go:89] "coredns-66bc5c9577-tnshz" [82f3f101-dca9-4461-9242-a147b7b81507] Running
	I1213 12:17:46.924630  673374 system_pods.go:89] "etcd-flannel-062409" [4e1a2601-97cf-4d8f-9b62-5716015b1d75] Running
	I1213 12:17:46.924635  673374 system_pods.go:89] "kube-apiserver-flannel-062409" [ae043393-825f-41ac-83a8-168551270b4e] Running
	I1213 12:17:46.924639  673374 system_pods.go:89] "kube-controller-manager-flannel-062409" [8d31cb77-50a7-43cb-9cf0-dba5600816bf] Running
	I1213 12:17:46.924644  673374 system_pods.go:89] "kube-proxy-w2k2w" [faa8f713-0617-4453-b51c-a66ad9a0b5cb] Running
	I1213 12:17:46.924648  673374 system_pods.go:89] "kube-scheduler-flannel-062409" [73db4d5d-6017-4f4c-928b-b0fc6759756b] Running
	I1213 12:17:46.924652  673374 system_pods.go:89] "storage-provisioner" [8c9eeb18-7fd1-40f0-b557-dbc664fae5fa] Running
	I1213 12:17:46.924661  673374 system_pods.go:126] duration metric: took 14.375357841s to wait for k8s-apps to be running ...
	I1213 12:17:46.924673  673374 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 12:17:46.924737  673374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 12:17:46.938221  673374 system_svc.go:56] duration metric: took 13.537993ms WaitForService to wait for kubelet
	I1213 12:17:46.938251  673374 kubeadm.go:587] duration metric: took 20.444978328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:17:46.938270  673374 node_conditions.go:102] verifying NodePressure condition ...
	I1213 12:17:46.941075  673374 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 12:17:46.941112  673374 node_conditions.go:123] node cpu capacity is 2
	I1213 12:17:46.941129  673374 node_conditions.go:105] duration metric: took 2.851288ms to run NodePressure ...
	I1213 12:17:46.941142  673374 start.go:242] waiting for startup goroutines ...
	I1213 12:17:46.941150  673374 start.go:247] waiting for cluster config update ...
	I1213 12:17:46.941161  673374 start.go:256] writing updated cluster config ...
	I1213 12:17:46.941445  673374 ssh_runner.go:195] Run: rm -f paused
	I1213 12:17:46.945493  673374 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 12:17:46.949558  673374 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tnshz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:46.954612  673374 pod_ready.go:94] pod "coredns-66bc5c9577-tnshz" is "Ready"
	I1213 12:17:46.954645  673374 pod_ready.go:86] duration metric: took 5.060652ms for pod "coredns-66bc5c9577-tnshz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:46.957296  673374 pod_ready.go:83] waiting for pod "etcd-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:46.961854  673374 pod_ready.go:94] pod "etcd-flannel-062409" is "Ready"
	I1213 12:17:46.961882  673374 pod_ready.go:86] duration metric: took 4.559194ms for pod "etcd-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:46.964137  673374 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:46.969145  673374 pod_ready.go:94] pod "kube-apiserver-flannel-062409" is "Ready"
	I1213 12:17:46.969220  673374 pod_ready.go:86] duration metric: took 5.052856ms for pod "kube-apiserver-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:46.971683  673374 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:47.349128  673374 pod_ready.go:94] pod "kube-controller-manager-flannel-062409" is "Ready"
	I1213 12:17:47.349155  673374 pod_ready.go:86] duration metric: took 377.405089ms for pod "kube-controller-manager-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:47.549695  673374 pod_ready.go:83] waiting for pod "kube-proxy-w2k2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:47.950016  673374 pod_ready.go:94] pod "kube-proxy-w2k2w" is "Ready"
	I1213 12:17:47.950042  673374 pod_ready.go:86] duration metric: took 400.321239ms for pod "kube-proxy-w2k2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:48.150190  673374 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:48.550564  673374 pod_ready.go:94] pod "kube-scheduler-flannel-062409" is "Ready"
	I1213 12:17:48.550657  673374 pod_ready.go:86] duration metric: took 400.428377ms for pod "kube-scheduler-flannel-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:17:48.550687  673374 pod_ready.go:40] duration metric: took 1.605158304s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 12:17:48.613108  673374 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 12:17:48.616389  673374 out.go:179] * Done! kubectl is now configured to use "flannel-062409" cluster and "default" namespace by default
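	(Sketch, not part of the recorded run: once minikube reports the context above as configured, the resulting state could be spot-checked from the host. The context name flannel-062409 is taken from this log; the commands are generic kubectl invocations.)

	  # Verify the node and the kube-system pods reported Ready in the log above.
	  kubectl --context flannel-062409 get nodes
	  kubectl --context flannel-062409 -n kube-system get pods -o wide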
	
	
	==> CRI-O <==
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522775611Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522787123Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522793498Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522799176Z" level=info msg="RDT not available in the host system"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522824374Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523715533Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523753753Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523772756Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.524439847Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.524461181Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.52461968Z" level=info msg="Updated default CNI network name to "
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.525403671Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.529256513Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.529355665Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576580003Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576617025Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576659233Z" level=info msg="Create NRI interface"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576753674Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576762569Z" level=info msg="runtime interface created"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576773646Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576779767Z" level=info msg="runtime interface starting up..."
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576785831Z" level=info msg="starting plugins..."
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576798393Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576863575Z" level=info msg="No systemd watchdog enabled"
	Dec 13 12:03:09 no-preload-307409 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
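	(The empty table above indicates the runtime listed no containers at collection time. A sketch of how such a listing is typically obtained on a CRI-O node; the exact command used to gather this section is an assumption, the socket path matches the CRI-O configuration dumped earlier.)

	  # List all containers, including exited ones, via the CRI-O socket.
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a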
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:18:20.306885    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:18:20.308169    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:18:20.308866    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:18:20.310549    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:18:20.310817    8211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
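	(The repeated "connection refused" errors above mean nothing is listening on localhost:8443 on this node, which is consistent with the kubelet crash loop shown further below preventing the static kube-apiserver pod from starting. A hedged way to reproduce the same probe by hand:)

	  # With no apiserver listening, this fails with the same connection-refused error.
	  curl -k https://localhost:8443/healthz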
	
	
	==> dmesg <==
	[Dec13 11:26] overlayfs: idmapped layers are currently not supported
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	[Dec13 12:09] overlayfs: idmapped layers are currently not supported
	[Dec13 12:11] overlayfs: idmapped layers are currently not supported
	[Dec13 12:12] overlayfs: idmapped layers are currently not supported
	[Dec13 12:14] overlayfs: idmapped layers are currently not supported
	[Dec13 12:15] overlayfs: idmapped layers are currently not supported
	[Dec13 12:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:18:20 up  4:00,  0 user,  load average: 1.39, 1.58, 1.47
	Linux no-preload-307409 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:18:17 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:18:18 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1209.
	Dec 13 12:18:18 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:18:18 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:18:18 no-preload-307409 kubelet[8080]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:18:18 no-preload-307409 kubelet[8080]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:18:18 no-preload-307409 kubelet[8080]: E1213 12:18:18.610748    8080 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:18:18 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:18:18 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:18:19 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1210.
	Dec 13 12:18:19 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:18:19 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:18:19 no-preload-307409 kubelet[8114]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:18:19 no-preload-307409 kubelet[8114]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:18:19 no-preload-307409 kubelet[8114]: E1213 12:18:19.322598    8114 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:18:19 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:18:19 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:18:20 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1211.
	Dec 13 12:18:20 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:18:20 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:18:20 no-preload-307409 kubelet[8180]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:18:20 no-preload-307409 kubelet[8180]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:18:20 no-preload-307409 kubelet[8180]: E1213 12:18:20.158309    8180 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:18:20 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:18:20 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
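The kubelet excerpt above shows the actual root cause of this failure: the v1.35.0-beta.0 kubelet exits during configuration validation because the host still uses the legacy cgroup v1 hierarchy, so the control plane never comes back after the stop. A minimal diagnostic sketch, assuming shell access to the no-preload-307409 node and the usual kubeadm file layout (failCgroupV1 is the upstream KubeletConfiguration field name, not something taken from this log):

    # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy
    # cgroup v1, which is what the validation error above rejects.
    stat -fc %T /sys/fs/cgroup/

    # Inspect the generated kubelet config; newer kubelets expose a failCgroupV1
    # setting that turns cgroup v1 hosts into a hard startup failure.
    sudo grep -i cgroup /var/lib/kubelet/config.yaml

    # Watch the restart loop captured in the journal excerpt above.
    sudo journalctl -u kubelet -f

On this 5.15.0 Ubuntu 20.04-based AWS host (see the kernel section) cgroup v1 is the expected default, which matches the error; moving the host to cgroup v2 or pinning an older Kubernetes version are the likely remedies.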
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 2 (455.923844ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.59s)
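The empty container table and the Stopped apiserver status are consistent: the kubelet never gets far enough to start the static pods, so nothing listens on 8443. A quick hand check on the node, sketched with assumed defaults (crictl already pointed at the CRI-O socket, apiserver on the standard 8443 port):

    # No kube-apiserver container should appear while the kubelet is crash-looping.
    sudo crictl ps -a --name kube-apiserver

    # Probe the apiserver port directly; expect "connection refused" while it is down.
    curl -k https://192.168.85.2:8443/healthz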

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (242.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
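Each warning that follows is one iteration of that wait: the helper lists dashboard pods by label against the profile's apiserver and gets connection refused because the apiserver is down. A rough hand-run equivalent of the same query, assuming the kubeconfig context carries the profile name:

    kubectl --context no-preload-307409 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard

    # or the raw request visible in the warnings below:
    curl -k "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"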
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:21.747345  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:18:21.754095  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:18:21.765633  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:18:21.787061  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:18:21.828410  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:18:21.909767  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:22.070981  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:18:22.393206  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:23.036311  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:24.317782  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:32.001532  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:42.243790  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:18:49.721303  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:19:02.725082  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:19:06.640071  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:19:21.275821  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:19:43.686640  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1213 12:19:46.054163  356328 config.go:182] Loaded profile config "bridge-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:19:55.299734  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:19:55.306118  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:19:55.317637  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:19:55.340109  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:19:55.381467  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:19:55.462987  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:19:55.624834  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:19:55.947042  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:19:57.875052  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:20:00.436901  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:20:05.558865  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:20:15.800595  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:20:36.282425  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:20:42.365039  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/auto-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:20:44.682382  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/default-k8s-diff-port-151605/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:05.608411  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/calico-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:17.244494  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:22.208946  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:21:22.215422  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:21:22.226924  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:21:22.248325  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:21:22.289788  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:21:22.371206  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:21:22.532760  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 12:21:22.854534  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:23.496022  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:24.777638  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:27.339809  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:32.462138  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:37.412849  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:21:42.704375  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:22:00.470883  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:22:03.186712  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/enable-default-cni-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1213 12:22:05.117767  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kindnet-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
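The warnings above are the test helper repeatedly polling for the kubernetes-dashboard pod and getting connection refused from the apiserver on every attempt; the interleaved cert_rotation errors appear to come from client-go's transport cache trying to reload client.crt files for other profiles on this host (auto, calico, kindnet, custom-flannel, enable-default-cni, default-k8s-diff-port, functional) that no longer exist on disk. A minimal, hedged sketch of checking by hand which profile certificates are still present, using the path from the errors above:

	# list whichever profile client certs still exist under this job's .minikube dir
	ls -l /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/*/client.crt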
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 2 (303.624536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-307409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-307409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.674µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-307409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
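Taken together, the nine-minute run of connection-refused warnings, the "Stopped" apiserver status, and the context-deadline failures above suggest the control plane for this profile never became reachable again after the post-stop restart, so neither the dashboard pod check nor the follow-up kubectl describe could complete. A minimal sketch of reproducing the same checks by hand, using only the profile and namespace names that appear in this log:

	# is the apiserver for this profile reported as Running or Stopped?
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-307409

	# if it is Running, this is the pod list the helper keeps polling for
	kubectl --context no-preload-307409 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard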
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-307409
helpers_test.go:244: (dbg) docker inspect no-preload-307409:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	        "Created": "2025-12-13T11:52:23.357834479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T12:03:03.340968033Z",
	            "FinishedAt": "2025-12-13T12:03:01.976500099Z"
	        },
	        "Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
	        "ResolvConfPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/hosts",
	        "LogPath": "/var/lib/docker/containers/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a/9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a-json.log",
	        "Name": "/no-preload-307409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-307409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-307409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fe6186bf0c84a16b285df9e7b2b247216c0a242e547316b66ecaf0754ce555a",
	                "LowerDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35-init/diff:/var/lib/docker/overlay2/035e51a8b51aaf3a94025ceca49891727cbd38e4de9c592f17e355e13bea0ebf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b049336bc6fd29f679cf8976a7bec8b87044377a6cc96e4ed0dfb3230dc5be35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-307409",
	                "Source": "/var/lib/docker/volumes/no-preload-307409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-307409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-307409",
	                "name.minikube.sigs.k8s.io": "no-preload-307409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c126f047073986da1996efceb8a3e932bcfa233495a4aa62f7ff0993488c461e",
	            "SandboxKey": "/var/run/docker/netns/c126f0470739",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-307409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b6:08:7b:b6:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "280e424abad6162e6fbaaf316b3c6095ab0d80a59a1f82eb556a84b2dd4f139a",
	                    "EndpointID": "012a611abbc58ce4e9989db1baedc5a39d41b5ffd347c4e9d8cd59dee05ce5c5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-307409",
	                        "9fe6186bf0c8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 2 (308.093144ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-307409 logs -n 25
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-062409 sudo iptables -t nat -L -n -v                                 │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo systemctl status kubelet --all --full --no-pager         │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo systemctl cat kubelet --no-pager                         │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo systemctl status docker --all --full --no-pager          │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │                     │
	│ ssh     │ -p bridge-062409 sudo systemctl cat docker --no-pager                          │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo cat /etc/docker/daemon.json                              │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │                     │
	│ ssh     │ -p bridge-062409 sudo docker system info                                       │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │                     │
	│ ssh     │ -p bridge-062409 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │                     │
	│ ssh     │ -p bridge-062409 sudo systemctl cat cri-docker --no-pager                      │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │                     │
	│ ssh     │ -p bridge-062409 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo cri-dockerd --version                                    │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo systemctl status containerd --all --full --no-pager      │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │                     │
	│ ssh     │ -p bridge-062409 sudo systemctl cat containerd --no-pager                      │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo cat /lib/systemd/system/containerd.service               │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo cat /etc/containerd/config.toml                          │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo containerd config dump                                   │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo systemctl status crio --all --full --no-pager            │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo systemctl cat crio --no-pager                            │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ ssh     │ -p bridge-062409 sudo crio config                                              │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	│ delete  │ -p bridge-062409                                                               │ bridge-062409 │ jenkins │ v1.37.0 │ 13 Dec 25 12:20 UTC │ 13 Dec 25 12:20 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 12:18:28
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 12:18:28.683440  680422 out.go:360] Setting OutFile to fd 1 ...
	I1213 12:18:28.683648  680422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:18:28.683680  680422 out.go:374] Setting ErrFile to fd 2...
	I1213 12:18:28.683703  680422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 12:18:28.683946  680422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 12:18:28.684406  680422 out.go:368] Setting JSON to false
	I1213 12:18:28.685344  680422 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14461,"bootTime":1765613848,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 12:18:28.685443  680422 start.go:143] virtualization:  
	I1213 12:18:28.689609  680422 out.go:179] * [bridge-062409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 12:18:28.694018  680422 notify.go:221] Checking for updates...
	I1213 12:18:28.697425  680422 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 12:18:28.700882  680422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 12:18:28.704067  680422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:18:28.707200  680422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 12:18:28.710330  680422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 12:18:28.713317  680422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 12:18:28.716923  680422 config.go:182] Loaded profile config "no-preload-307409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 12:18:28.717030  680422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 12:18:28.752169  680422 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 12:18:28.752342  680422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:18:28.811361  680422 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:18:28.800663079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:18:28.811494  680422 docker.go:319] overlay module found
	I1213 12:18:28.816725  680422 out.go:179] * Using the docker driver based on user configuration
	I1213 12:18:28.819734  680422 start.go:309] selected driver: docker
	I1213 12:18:28.819757  680422 start.go:927] validating driver "docker" against <nil>
	I1213 12:18:28.819771  680422 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 12:18:28.820498  680422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 12:18:28.874133  680422 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 12:18:28.865109845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 12:18:28.874282  680422 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 12:18:28.874516  680422 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:18:28.877590  680422 out.go:179] * Using Docker driver with root privileges
	I1213 12:18:28.880468  680422 cni.go:84] Creating CNI manager for "bridge"
	I1213 12:18:28.880493  680422 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1213 12:18:28.880586  680422 start.go:353] cluster config:
	{Name:bridge-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:18:28.885615  680422 out.go:179] * Starting "bridge-062409" primary control-plane node in "bridge-062409" cluster
	I1213 12:18:28.888458  680422 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 12:18:28.891411  680422 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 12:18:28.894311  680422 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 12:18:28.894373  680422 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1213 12:18:28.894389  680422 cache.go:65] Caching tarball of preloaded images
	I1213 12:18:28.894396  680422 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 12:18:28.894496  680422 preload.go:238] Found /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 12:18:28.894508  680422 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 12:18:28.894628  680422 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/config.json ...
	I1213 12:18:28.894655  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/config.json: {Name:mk6a7da95216000a7c4d3016a3c2dbab43939b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:28.913816  680422 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 12:18:28.913839  680422 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 12:18:28.913861  680422 cache.go:243] Successfully downloaded all kic artifacts
	I1213 12:18:28.913892  680422 start.go:360] acquireMachinesLock for bridge-062409: {Name:mk8bfde0622b406c827497205fb469f8506bd468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 12:18:28.913994  680422 start.go:364] duration metric: took 80.363µs to acquireMachinesLock for "bridge-062409"
	I1213 12:18:28.914023  680422 start.go:93] Provisioning new machine with config: &{Name:bridge-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-062409 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:18:28.914094  680422 start.go:125] createHost starting for "" (driver="docker")
	I1213 12:18:28.917565  680422 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 12:18:28.917811  680422 start.go:159] libmachine.API.Create for "bridge-062409" (driver="docker")
	I1213 12:18:28.917849  680422 client.go:173] LocalClient.Create starting
	I1213 12:18:28.917913  680422 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem
	I1213 12:18:28.917948  680422 main.go:143] libmachine: Decoding PEM data...
	I1213 12:18:28.917974  680422 main.go:143] libmachine: Parsing certificate...
	I1213 12:18:28.918032  680422 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem
	I1213 12:18:28.918058  680422 main.go:143] libmachine: Decoding PEM data...
	I1213 12:18:28.918074  680422 main.go:143] libmachine: Parsing certificate...
	I1213 12:18:28.918431  680422 cli_runner.go:164] Run: docker network inspect bridge-062409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 12:18:28.933536  680422 cli_runner.go:211] docker network inspect bridge-062409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 12:18:28.933622  680422 network_create.go:284] running [docker network inspect bridge-062409] to gather additional debugging logs...
	I1213 12:18:28.933645  680422 cli_runner.go:164] Run: docker network inspect bridge-062409
	W1213 12:18:28.950063  680422 cli_runner.go:211] docker network inspect bridge-062409 returned with exit code 1
	I1213 12:18:28.950096  680422 network_create.go:287] error running [docker network inspect bridge-062409]: docker network inspect bridge-062409: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-062409 not found
	I1213 12:18:28.950110  680422 network_create.go:289] output of [docker network inspect bridge-062409]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-062409 not found
	
	** /stderr **
	I1213 12:18:28.950211  680422 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:18:28.967018  680422 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
	I1213 12:18:28.967419  680422 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-de5fe2fbe3b8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:54:47:7f:e7:3a} reservation:<nil>}
	I1213 12:18:28.967703  680422 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7c96683190e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:0a:60:46:c5:4a} reservation:<nil>}
	I1213 12:18:28.968184  680422 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400199b1c0}
	I1213 12:18:28.968207  680422 network_create.go:124] attempt to create docker network bridge-062409 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1213 12:18:28.968260  680422 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-062409 bridge-062409
	I1213 12:18:29.043807  680422 network_create.go:108] docker network bridge-062409 192.168.76.0/24 created
	I1213 12:18:29.043865  680422 kic.go:121] calculated static IP "192.168.76.2" for the "bridge-062409" container
	I1213 12:18:29.043944  680422 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 12:18:29.062435  680422 cli_runner.go:164] Run: docker volume create bridge-062409 --label name.minikube.sigs.k8s.io=bridge-062409 --label created_by.minikube.sigs.k8s.io=true
	I1213 12:18:29.082950  680422 oci.go:103] Successfully created a docker volume bridge-062409
	I1213 12:18:29.083036  680422 cli_runner.go:164] Run: docker run --rm --name bridge-062409-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-062409 --entrypoint /usr/bin/test -v bridge-062409:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 12:18:29.646140  680422 oci.go:107] Successfully prepared a docker volume bridge-062409
	I1213 12:18:29.646211  680422 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 12:18:29.646223  680422 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 12:18:29.646288  680422 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v bridge-062409:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 12:18:33.719495  680422 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v bridge-062409:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.073162651s)
	I1213 12:18:33.719571  680422 kic.go:203] duration metric: took 4.073345514s to extract preloaded images to volume ...
	W1213 12:18:33.719708  680422 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 12:18:33.719809  680422 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 12:18:33.771992  680422 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-062409 --name bridge-062409 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-062409 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-062409 --network bridge-062409 --ip 192.168.76.2 --volume bridge-062409:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 12:18:34.090145  680422 cli_runner.go:164] Run: docker container inspect bridge-062409 --format={{.State.Running}}
	I1213 12:18:34.110413  680422 cli_runner.go:164] Run: docker container inspect bridge-062409 --format={{.State.Status}}
	I1213 12:18:34.130653  680422 cli_runner.go:164] Run: docker exec bridge-062409 stat /var/lib/dpkg/alternatives/iptables
	I1213 12:18:34.186094  680422 oci.go:144] the created container "bridge-062409" has a running status.
	I1213 12:18:34.186129  680422 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa...
	I1213 12:18:34.298495  680422 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 12:18:34.341793  680422 cli_runner.go:164] Run: docker container inspect bridge-062409 --format={{.State.Status}}
	I1213 12:18:34.368711  680422 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 12:18:34.368738  680422 kic_runner.go:114] Args: [docker exec --privileged bridge-062409 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 12:18:34.421280  680422 cli_runner.go:164] Run: docker container inspect bridge-062409 --format={{.State.Status}}
	I1213 12:18:34.440892  680422 machine.go:94] provisionDockerMachine start ...
	I1213 12:18:34.440997  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:34.467691  680422 main.go:143] libmachine: Using SSH client type: native
	I1213 12:18:34.468041  680422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1213 12:18:34.468051  680422 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 12:18:34.471756  680422 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1213 12:18:37.619113  680422 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-062409
	
	I1213 12:18:37.619140  680422 ubuntu.go:182] provisioning hostname "bridge-062409"
	I1213 12:18:37.619206  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:37.636593  680422 main.go:143] libmachine: Using SSH client type: native
	I1213 12:18:37.636916  680422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1213 12:18:37.636933  680422 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-062409 && echo "bridge-062409" | sudo tee /etc/hostname
	I1213 12:18:37.797057  680422 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-062409
	
	I1213 12:18:37.797220  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:37.814027  680422 main.go:143] libmachine: Using SSH client type: native
	I1213 12:18:37.814348  680422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1213 12:18:37.814364  680422 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-062409' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-062409/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-062409' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 12:18:37.967717  680422 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 12:18:37.967745  680422 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-354468/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-354468/.minikube}
	I1213 12:18:37.967765  680422 ubuntu.go:190] setting up certificates
	I1213 12:18:37.967804  680422 provision.go:84] configureAuth start
	I1213 12:18:37.967893  680422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-062409
	I1213 12:18:37.984468  680422 provision.go:143] copyHostCerts
	I1213 12:18:37.984536  680422 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem, removing ...
	I1213 12:18:37.984548  680422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem
	I1213 12:18:37.984632  680422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/ca.pem (1082 bytes)
	I1213 12:18:37.984733  680422 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem, removing ...
	I1213 12:18:37.984750  680422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem
	I1213 12:18:37.984780  680422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/cert.pem (1123 bytes)
	I1213 12:18:37.984836  680422 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem, removing ...
	I1213 12:18:37.984846  680422 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem
	I1213 12:18:37.984874  680422 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-354468/.minikube/key.pem (1679 bytes)
	I1213 12:18:37.984925  680422 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem org=jenkins.bridge-062409 san=[127.0.0.1 192.168.76.2 bridge-062409 localhost minikube]
	I1213 12:18:38.371294  680422 provision.go:177] copyRemoteCerts
	I1213 12:18:38.371361  680422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 12:18:38.371406  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:38.388353  680422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa Username:docker}
	I1213 12:18:38.492647  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 12:18:38.510277  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 12:18:38.527614  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 12:18:38.545311  680422 provision.go:87] duration metric: took 577.476416ms to configureAuth
	I1213 12:18:38.545341  680422 ubuntu.go:206] setting minikube options for container-runtime
	I1213 12:18:38.545528  680422 config.go:182] Loaded profile config "bridge-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 12:18:38.545647  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:38.562751  680422 main.go:143] libmachine: Using SSH client type: native
	I1213 12:18:38.563066  680422 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I1213 12:18:38.563079  680422 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 12:18:38.887069  680422 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 12:18:38.887091  680422 machine.go:97] duration metric: took 4.446178362s to provisionDockerMachine
	I1213 12:18:38.887102  680422 client.go:176] duration metric: took 9.969243051s to LocalClient.Create
	I1213 12:18:38.887115  680422 start.go:167] duration metric: took 9.969305747s to libmachine.API.Create "bridge-062409"
	I1213 12:18:38.887122  680422 start.go:293] postStartSetup for "bridge-062409" (driver="docker")
	I1213 12:18:38.887132  680422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 12:18:38.887206  680422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 12:18:38.887249  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:38.905228  680422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa Username:docker}
	I1213 12:18:39.012733  680422 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 12:18:39.016353  680422 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 12:18:39.016382  680422 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 12:18:39.016394  680422 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/addons for local assets ...
	I1213 12:18:39.016447  680422 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-354468/.minikube/files for local assets ...
	I1213 12:18:39.016532  680422 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem -> 3563282.pem in /etc/ssl/certs
	I1213 12:18:39.016638  680422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 12:18:39.024229  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:18:39.042120  680422 start.go:296] duration metric: took 154.982687ms for postStartSetup
	I1213 12:18:39.042540  680422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-062409
	I1213 12:18:39.060682  680422 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/config.json ...
	I1213 12:18:39.060970  680422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 12:18:39.061020  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:39.078661  680422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa Username:docker}
	I1213 12:18:39.180640  680422 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 12:18:39.185676  680422 start.go:128] duration metric: took 10.271560289s to createHost
	I1213 12:18:39.185703  680422 start.go:83] releasing machines lock for "bridge-062409", held for 10.271694396s
	I1213 12:18:39.185791  680422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-062409
	I1213 12:18:39.202315  680422 ssh_runner.go:195] Run: cat /version.json
	I1213 12:18:39.202361  680422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 12:18:39.202374  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:39.202431  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:18:39.227738  680422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa Username:docker}
	I1213 12:18:39.232288  680422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa Username:docker}
	I1213 12:18:39.331177  680422 ssh_runner.go:195] Run: systemctl --version
	I1213 12:18:39.419183  680422 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 12:18:39.454878  680422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 12:18:39.459302  680422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 12:18:39.459377  680422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 12:18:39.489097  680422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1213 12:18:39.489177  680422 start.go:496] detecting cgroup driver to use...
	I1213 12:18:39.489241  680422 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 12:18:39.489339  680422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 12:18:39.506466  680422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 12:18:39.519822  680422 docker.go:218] disabling cri-docker service (if available) ...
	I1213 12:18:39.519882  680422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 12:18:39.541809  680422 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 12:18:39.564036  680422 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 12:18:39.683839  680422 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 12:18:39.809052  680422 docker.go:234] disabling docker service ...
	I1213 12:18:39.809119  680422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 12:18:39.831182  680422 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 12:18:39.844464  680422 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 12:18:39.962978  680422 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 12:18:40.105479  680422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 12:18:40.119825  680422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 12:18:40.135459  680422 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 12:18:40.135569  680422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:18:40.145242  680422 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 12:18:40.145373  680422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:18:40.154187  680422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:18:40.162860  680422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:18:40.172177  680422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 12:18:40.180366  680422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:18:40.189313  680422 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:18:40.203092  680422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 12:18:40.212195  680422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 12:18:40.220088  680422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 12:18:40.227489  680422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:18:40.347967  680422 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 12:18:40.547540  680422 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 12:18:40.547607  680422 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 12:18:40.551761  680422 start.go:564] Will wait 60s for crictl version
	I1213 12:18:40.551841  680422 ssh_runner.go:195] Run: which crictl
	I1213 12:18:40.555651  680422 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 12:18:40.579309  680422 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 12:18:40.579414  680422 ssh_runner.go:195] Run: crio --version
	I1213 12:18:40.606471  680422 ssh_runner.go:195] Run: crio --version
	I1213 12:18:40.639394  680422 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 12:18:40.642331  680422 cli_runner.go:164] Run: docker network inspect bridge-062409 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 12:18:40.658608  680422 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1213 12:18:40.662566  680422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 12:18:40.672131  680422 kubeadm.go:884] updating cluster {Name:bridge-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 12:18:40.672242  680422 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 12:18:40.672309  680422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:18:40.707841  680422 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:18:40.707863  680422 crio.go:433] Images already preloaded, skipping extraction
	I1213 12:18:40.707920  680422 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 12:18:40.732099  680422 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 12:18:40.732121  680422 cache_images.go:86] Images are preloaded, skipping loading
	I1213 12:18:40.732129  680422 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1213 12:18:40.732255  680422 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=bridge-062409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1213 12:18:40.732363  680422 ssh_runner.go:195] Run: crio config
	I1213 12:18:40.805379  680422 cni.go:84] Creating CNI manager for "bridge"
	I1213 12:18:40.805421  680422 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 12:18:40.805445  680422 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-062409 NodeName:bridge-062409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 12:18:40.805596  680422 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-062409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
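(Note: the kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new, 2210 bytes, by the scp step below and later fed to kubeadm init. As a hedged sketch only, assuming kubeadm is present under the versioned binaries directory shown in this log, the same file could be sanity-checked beforehand with a dry run, which renders the plan without modifying the host:)

	# sketch only; not executed by the test run
	sudo env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run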
	I1213 12:18:40.805731  680422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 12:18:40.814367  680422 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 12:18:40.814468  680422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 12:18:40.822027  680422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 12:18:40.834841  680422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 12:18:40.847752  680422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1213 12:18:40.860827  680422 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1213 12:18:40.865266  680422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
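(Note: the one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts: it drops any stale entry, appends the new record, and copies the result back. An expanded equivalent, shown only for readability:)

	# illustrative expansion of the command above
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new    # keep everything except an old mapping
	printf '192.168.76.2\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new   # append the pinned record
	sudo cp /tmp/hosts.new /etc/hosts                                            # install the updated file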
	I1213 12:18:40.874853  680422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:18:40.989798  680422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:18:41.006502  680422 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409 for IP: 192.168.76.2
	I1213 12:18:41.006569  680422 certs.go:195] generating shared ca certs ...
	I1213 12:18:41.006601  680422 certs.go:227] acquiring lock for ca certs: {Name:mk4fca88f7804266e31f4d3f3edee3e478f2cd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:41.006786  680422 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key
	I1213 12:18:41.006860  680422 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key
	I1213 12:18:41.006897  680422 certs.go:257] generating profile certs ...
	I1213 12:18:41.006975  680422 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/client.key
	I1213 12:18:41.007022  680422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/client.crt with IP's: []
	I1213 12:18:41.440975  680422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/client.crt ...
	I1213 12:18:41.441010  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/client.crt: {Name:mk818b98cfd245c47376bda5bb25a80bc1adb822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:41.441248  680422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/client.key ...
	I1213 12:18:41.441263  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/client.key: {Name:mk2fb8ea6a914eaa015da239a66dd874a4668983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:41.441372  680422 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.key.6131309c
	I1213 12:18:41.441391  680422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.crt.6131309c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1213 12:18:41.612596  680422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.crt.6131309c ...
	I1213 12:18:41.612629  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.crt.6131309c: {Name:mk466f3b1d2c5a6a0a9f5255d1fabcdccff6b778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:41.612815  680422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.key.6131309c ...
	I1213 12:18:41.612831  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.key.6131309c: {Name:mk675217857bdf6a42376e041d0312a04055873a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:41.612918  680422 certs.go:382] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.crt.6131309c -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.crt
	I1213 12:18:41.612994  680422 certs.go:386] copying /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.key.6131309c -> /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.key
	I1213 12:18:41.613062  680422 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.key
	I1213 12:18:41.613080  680422 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.crt with IP's: []
	I1213 12:18:42.287397  680422 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.crt ...
	I1213 12:18:42.287431  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.crt: {Name:mk422369d9a196e1487324edf54a25ab0964f62e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:42.287643  680422 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.key ...
	I1213 12:18:42.287662  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.key: {Name:mk16363bf330b0afb35883f05eed02a7a6d9362b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:18:42.287858  680422 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem (1338 bytes)
	W1213 12:18:42.287910  680422 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328_empty.pem, impossibly tiny 0 bytes
	I1213 12:18:42.287924  680422 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 12:18:42.287953  680422 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/ca.pem (1082 bytes)
	I1213 12:18:42.287985  680422 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/cert.pem (1123 bytes)
	I1213 12:18:42.288015  680422 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/certs/key.pem (1679 bytes)
	I1213 12:18:42.288064  680422 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem (1708 bytes)
	I1213 12:18:42.288704  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 12:18:42.309508  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 12:18:42.329617  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 12:18:42.347691  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 12:18:42.366039  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 12:18:42.383718  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 12:18:42.401055  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 12:18:42.419372  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/bridge-062409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 12:18:42.437785  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/certs/356328.pem --> /usr/share/ca-certificates/356328.pem (1338 bytes)
	I1213 12:18:42.455779  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/ssl/certs/3563282.pem --> /usr/share/ca-certificates/3563282.pem (1708 bytes)
	I1213 12:18:42.473621  680422 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 12:18:42.491536  680422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 12:18:42.504908  680422 ssh_runner.go:195] Run: openssl version
	I1213 12:18:42.511655  680422 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/356328.pem
	I1213 12:18:42.530770  680422 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/356328.pem /etc/ssl/certs/356328.pem
	I1213 12:18:42.543701  680422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356328.pem
	I1213 12:18:42.550767  680422 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:34 /usr/share/ca-certificates/356328.pem
	I1213 12:18:42.550836  680422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356328.pem
	I1213 12:18:42.608850  680422 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 12:18:42.618748  680422 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/356328.pem /etc/ssl/certs/51391683.0
	I1213 12:18:42.625953  680422 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3563282.pem
	I1213 12:18:42.633636  680422 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3563282.pem /etc/ssl/certs/3563282.pem
	I1213 12:18:42.641049  680422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3563282.pem
	I1213 12:18:42.644708  680422 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:34 /usr/share/ca-certificates/3563282.pem
	I1213 12:18:42.644770  680422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3563282.pem
	I1213 12:18:42.685656  680422 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 12:18:42.693240  680422 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3563282.pem /etc/ssl/certs/3ec20f2e.0
	I1213 12:18:42.700750  680422 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:18:42.708195  680422 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 12:18:42.715672  680422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:18:42.719646  680422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:25 /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:18:42.719711  680422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 12:18:42.760959  680422 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 12:18:42.768400  680422 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
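(Note: the openssl/ln sequence above registers each PEM with the system trust store: the certificate is linked into /etc/ssl/certs and a <subject-hash>.0 symlink is created so OpenSSL can look it up by hash, b5213941 for minikubeCA.pem in this run. Generic form of those steps, shown only as a sketch:)

	# sketch of the hash-symlink scheme used above; the hash differs per certificate
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"${h}.0"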
	I1213 12:18:42.775730  680422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 12:18:42.779381  680422 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 12:18:42.779472  680422 kubeadm.go:401] StartCluster: {Name:bridge-062409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-062409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 12:18:42.779644  680422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 12:18:42.779707  680422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 12:18:42.805945  680422 cri.go:89] found id: ""
	I1213 12:18:42.806101  680422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 12:18:42.813806  680422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 12:18:42.821507  680422 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 12:18:42.821580  680422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 12:18:42.829264  680422 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 12:18:42.829301  680422 kubeadm.go:158] found existing configuration files:
	
	I1213 12:18:42.829377  680422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 12:18:42.837329  680422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 12:18:42.837443  680422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 12:18:42.844930  680422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 12:18:42.852375  680422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 12:18:42.852469  680422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 12:18:42.859801  680422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 12:18:42.867608  680422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 12:18:42.867671  680422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 12:18:42.874997  680422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 12:18:42.882708  680422 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 12:18:42.882775  680422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 12:18:42.890562  680422 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 12:18:42.931027  680422 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 12:18:42.931092  680422 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 12:18:42.958411  680422 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 12:18:42.958518  680422 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1213 12:18:42.958567  680422 kubeadm.go:319] OS: Linux
	I1213 12:18:42.958619  680422 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 12:18:42.958681  680422 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1213 12:18:42.958743  680422 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 12:18:42.958806  680422 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 12:18:42.958870  680422 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 12:18:42.958934  680422 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 12:18:42.958996  680422 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 12:18:42.959082  680422 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 12:18:42.959148  680422 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1213 12:18:43.026124  680422 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 12:18:43.026251  680422 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 12:18:43.026348  680422 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 12:18:43.036692  680422 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 12:18:43.043257  680422 out.go:252]   - Generating certificates and keys ...
	I1213 12:18:43.043378  680422 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 12:18:43.043465  680422 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 12:18:43.774647  680422 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 12:18:44.076672  680422 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 12:18:44.214535  680422 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 12:18:44.613357  680422 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 12:18:45.308411  680422 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 12:18:45.308752  680422 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [bridge-062409 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:18:45.574824  680422 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 12:18:45.575175  680422 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [bridge-062409 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1213 12:18:46.196334  680422 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 12:18:47.279148  680422 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 12:18:47.813778  680422 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 12:18:47.814074  680422 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 12:18:47.957413  680422 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 12:18:48.066580  680422 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 12:18:48.569121  680422 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 12:18:49.155396  680422 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 12:18:49.399610  680422 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 12:18:49.400279  680422 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 12:18:49.405234  680422 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 12:18:49.408696  680422 out.go:252]   - Booting up control plane ...
	I1213 12:18:49.408798  680422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 12:18:49.408877  680422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 12:18:49.408944  680422 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 12:18:49.423377  680422 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 12:18:49.423716  680422 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 12:18:49.431426  680422 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 12:18:49.431843  680422 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 12:18:49.432093  680422 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 12:18:49.572772  680422 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 12:18:49.572907  680422 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 12:18:51.074335  680422 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501837701s
	I1213 12:18:51.077799  680422 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 12:18:51.077894  680422 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1213 12:18:51.077984  680422 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 12:18:51.078063  680422 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 12:18:53.381268  680422 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.302921218s
	I1213 12:18:55.504840  680422 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.426917082s
	I1213 12:18:57.581296  680422 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503260992s
	I1213 12:18:57.613730  680422 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 12:18:57.629043  680422 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 12:18:57.643873  680422 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 12:18:57.644080  680422 kubeadm.go:319] [mark-control-plane] Marking the node bridge-062409 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 12:18:57.657337  680422 kubeadm.go:319] [bootstrap-token] Using token: mytrzy.4ip34qlcclsdozrk
	I1213 12:18:57.660258  680422 out.go:252]   - Configuring RBAC rules ...
	I1213 12:18:57.660398  680422 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 12:18:57.665686  680422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 12:18:57.674873  680422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 12:18:57.681347  680422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 12:18:57.685716  680422 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 12:18:57.689863  680422 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 12:18:57.988791  680422 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 12:18:58.421772  680422 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 12:18:58.987684  680422 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 12:18:58.988716  680422 kubeadm.go:319] 
	I1213 12:18:58.988792  680422 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 12:18:58.988801  680422 kubeadm.go:319] 
	I1213 12:18:58.988879  680422 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 12:18:58.988887  680422 kubeadm.go:319] 
	I1213 12:18:58.988913  680422 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 12:18:58.988975  680422 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 12:18:58.989028  680422 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 12:18:58.989036  680422 kubeadm.go:319] 
	I1213 12:18:58.989096  680422 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 12:18:58.989103  680422 kubeadm.go:319] 
	I1213 12:18:58.989151  680422 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 12:18:58.989155  680422 kubeadm.go:319] 
	I1213 12:18:58.989207  680422 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 12:18:58.989281  680422 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 12:18:58.989351  680422 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 12:18:58.989356  680422 kubeadm.go:319] 
	I1213 12:18:58.989454  680422 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 12:18:58.989532  680422 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 12:18:58.989536  680422 kubeadm.go:319] 
	I1213 12:18:58.989640  680422 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mytrzy.4ip34qlcclsdozrk \
	I1213 12:18:58.989745  680422 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 \
	I1213 12:18:58.989766  680422 kubeadm.go:319] 	--control-plane 
	I1213 12:18:58.989769  680422 kubeadm.go:319] 
	I1213 12:18:58.989854  680422 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 12:18:58.989858  680422 kubeadm.go:319] 
	I1213 12:18:58.989961  680422 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mytrzy.4ip34qlcclsdozrk \
	I1213 12:18:58.990319  680422 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a3798e8f4868c7e4585b4327b4f0565e5125112465fbf26ae2f7c9b7fec5e169 
	I1213 12:18:58.994704  680422 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1213 12:18:58.994941  680422 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1213 12:18:58.995051  680422 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 12:18:58.995077  680422 cni.go:84] Creating CNI manager for "bridge"
	I1213 12:18:58.998298  680422 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1213 12:18:59.002849  680422 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1213 12:18:59.011094  680422 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1213 12:18:59.028781  680422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 12:18:59.028961  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:18:59.029091  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-062409 minikube.k8s.io/updated_at=2025_12_13T12_18_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=625889e93b3a3d0ab16814abcc3b4c90fb83309b minikube.k8s.io/name=bridge-062409 minikube.k8s.io/primary=true
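(Note: the two kubectl invocations above grant cluster-admin to the kube-system:default service account via the minikube-rbac clusterrolebinding and stamp the node with minikube.k8s.io/* labels. Hedged verification commands, not executed by the test, using the same kubeconfig and binaries path shown in this log:)

	# illustrative checks only
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node bridge-062409 --show-labels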
	I1213 12:18:59.231437  680422 ops.go:34] apiserver oom_adj: -16
	I1213 12:18:59.231624  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:18:59.732211  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:19:00.232589  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:19:00.731680  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:19:01.231736  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:19:01.732174  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:19:02.232313  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:19:02.732630  680422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 12:19:02.849161  680422 kubeadm.go:1114] duration metric: took 3.820270549s to wait for elevateKubeSystemPrivileges
	I1213 12:19:02.849198  680422 kubeadm.go:403] duration metric: took 20.069730527s to StartCluster
	I1213 12:19:02.849228  680422 settings.go:142] acquiring lock: {Name:mkfde2b1cddc54ba68217c9e1af762eb1bb22d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:19:02.849306  680422 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 12:19:02.850335  680422 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/kubeconfig: {Name:mkc00a0312b68910dd502dbc8ca00cacd09b8c4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 12:19:02.850670  680422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 12:19:02.850671  680422 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 12:19:02.850955  680422 config.go:182] Loaded profile config "bridge-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 12:19:02.851001  680422 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 12:19:02.851073  680422 addons.go:70] Setting storage-provisioner=true in profile "bridge-062409"
	I1213 12:19:02.851088  680422 addons.go:239] Setting addon storage-provisioner=true in "bridge-062409"
	I1213 12:19:02.851118  680422 host.go:66] Checking if "bridge-062409" exists ...
	I1213 12:19:02.851641  680422 cli_runner.go:164] Run: docker container inspect bridge-062409 --format={{.State.Status}}
	I1213 12:19:02.852067  680422 addons.go:70] Setting default-storageclass=true in profile "bridge-062409"
	I1213 12:19:02.852095  680422 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "bridge-062409"
	I1213 12:19:02.852367  680422 cli_runner.go:164] Run: docker container inspect bridge-062409 --format={{.State.Status}}
	I1213 12:19:02.855982  680422 out.go:179] * Verifying Kubernetes components...
	I1213 12:19:02.859080  680422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 12:19:02.884288  680422 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 12:19:02.892329  680422 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:19:02.892354  680422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 12:19:02.892428  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:19:02.902720  680422 addons.go:239] Setting addon default-storageclass=true in "bridge-062409"
	I1213 12:19:02.902762  680422 host.go:66] Checking if "bridge-062409" exists ...
	I1213 12:19:02.903197  680422 cli_runner.go:164] Run: docker container inspect bridge-062409 --format={{.State.Status}}
	I1213 12:19:02.944018  680422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa Username:docker}
	I1213 12:19:02.952603  680422 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 12:19:02.952624  680422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 12:19:02.952685  680422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-062409
	I1213 12:19:02.987133  680422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/bridge-062409/id_rsa Username:docker}
	I1213 12:19:03.152941  680422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 12:19:03.173776  680422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 12:19:03.183327  680422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 12:19:03.202716  680422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 12:19:03.860952  680422 node_ready.go:35] waiting up to 15m0s for node "bridge-062409" to be "Ready" ...
	I1213 12:19:03.862375  680422 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1213 12:19:03.887914  680422 node_ready.go:49] node "bridge-062409" is "Ready"
	I1213 12:19:03.887939  680422 node_ready.go:38] duration metric: took 26.959997ms for node "bridge-062409" to be "Ready" ...
	I1213 12:19:03.887952  680422 api_server.go:52] waiting for apiserver process to appear ...
	I1213 12:19:03.888023  680422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 12:19:04.086852  680422 api_server.go:72] duration metric: took 1.236099884s to wait for apiserver process to appear ...
	I1213 12:19:04.086915  680422 api_server.go:88] waiting for apiserver healthz status ...
	I1213 12:19:04.086964  680422 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1213 12:19:04.090058  680422 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1213 12:19:04.093726  680422 addons.go:530] duration metric: took 1.242706346s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1213 12:19:04.102379  680422 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1213 12:19:04.103719  680422 api_server.go:141] control plane version: v1.34.2
	I1213 12:19:04.103748  680422 api_server.go:131] duration metric: took 16.795497ms to wait for apiserver health ...
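(Note: the healthz wait above probes the API server endpoint directly over HTTPS. A manual equivalent, shown only for illustration; -k skips certificate verification, or the cluster CA file from this log can be passed instead:)

	# illustrative manual probe of the same endpoint
	curl -k https://192.168.76.2:8443/healthz                                        # expect: ok
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.76.2:8443/healthz   # same check, verifying against the minikube CA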
	I1213 12:19:04.103758  680422 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 12:19:04.108447  680422 system_pods.go:59] 8 kube-system pods found
	I1213 12:19:04.108532  680422 system_pods.go:61] "coredns-66bc5c9577-pcc9s" [788d0d95-b429-49d5-ac64-8dded3885bc1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.108570  680422 system_pods.go:61] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.108596  680422 system_pods.go:61] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:04.108617  680422 system_pods.go:61] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:04.108649  680422 system_pods.go:61] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:04.108670  680422 system_pods.go:61] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:19:04.108692  680422 system_pods.go:61] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:19:04.108713  680422 system_pods.go:61] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:19:04.108738  680422 system_pods.go:74] duration metric: took 4.970183ms to wait for pod list to return data ...
	I1213 12:19:04.108758  680422 default_sa.go:34] waiting for default service account to be created ...
	I1213 12:19:04.111461  680422 default_sa.go:45] found service account: "default"
	I1213 12:19:04.111573  680422 default_sa.go:55] duration metric: took 2.792699ms for default service account to be created ...
	I1213 12:19:04.111591  680422 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 12:19:04.114847  680422 system_pods.go:86] 8 kube-system pods found
	I1213 12:19:04.114884  680422 system_pods.go:89] "coredns-66bc5c9577-pcc9s" [788d0d95-b429-49d5-ac64-8dded3885bc1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.114893  680422 system_pods.go:89] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.114898  680422 system_pods.go:89] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:04.114905  680422 system_pods.go:89] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:04.114911  680422 system_pods.go:89] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:04.114918  680422 system_pods.go:89] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:19:04.114925  680422 system_pods.go:89] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:19:04.114932  680422 system_pods.go:89] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:19:04.114962  680422 retry.go:31] will retry after 282.706534ms: missing components: kube-dns, kube-proxy
	I1213 12:19:04.366958  680422 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-062409" context rescaled to 1 replicas
	I1213 12:19:04.401423  680422 system_pods.go:86] 8 kube-system pods found
	I1213 12:19:04.401478  680422 system_pods.go:89] "coredns-66bc5c9577-pcc9s" [788d0d95-b429-49d5-ac64-8dded3885bc1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.401488  680422 system_pods.go:89] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.401527  680422 system_pods.go:89] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:04.401540  680422 system_pods.go:89] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:04.401547  680422 system_pods.go:89] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:04.401553  680422 system_pods.go:89] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:19:04.401565  680422 system_pods.go:89] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 12:19:04.401571  680422 system_pods.go:89] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:19:04.401597  680422 retry.go:31] will retry after 346.150405ms: missing components: kube-dns, kube-proxy
	I1213 12:19:04.752884  680422 system_pods.go:86] 8 kube-system pods found
	I1213 12:19:04.752981  680422 system_pods.go:89] "coredns-66bc5c9577-pcc9s" [788d0d95-b429-49d5-ac64-8dded3885bc1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.753007  680422 system_pods.go:89] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:04.753039  680422 system_pods.go:89] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:04.753066  680422 system_pods.go:89] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:04.753098  680422 system_pods.go:89] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:04.753124  680422 system_pods.go:89] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:19:04.753164  680422 system_pods.go:89] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running
	I1213 12:19:04.753185  680422 system_pods.go:89] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:19:04.753214  680422 retry.go:31] will retry after 311.489132ms: missing components: kube-dns, kube-proxy
	I1213 12:19:05.078536  680422 system_pods.go:86] 8 kube-system pods found
	I1213 12:19:05.078618  680422 system_pods.go:89] "coredns-66bc5c9577-pcc9s" [788d0d95-b429-49d5-ac64-8dded3885bc1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:05.078642  680422 system_pods.go:89] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:05.078695  680422 system_pods.go:89] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:05.078725  680422 system_pods.go:89] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:05.078751  680422 system_pods.go:89] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:05.078776  680422 system_pods.go:89] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:19:05.078810  680422 system_pods.go:89] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running
	I1213 12:19:05.078840  680422 system_pods.go:89] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 12:19:05.078875  680422 retry.go:31] will retry after 572.99021ms: missing components: kube-dns, kube-proxy
	I1213 12:19:05.656274  680422 system_pods.go:86] 7 kube-system pods found
	I1213 12:19:05.656312  680422 system_pods.go:89] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:05.656320  680422 system_pods.go:89] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:05.656325  680422 system_pods.go:89] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:05.656332  680422 system_pods.go:89] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:05.656338  680422 system_pods.go:89] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:19:05.656342  680422 system_pods.go:89] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running
	I1213 12:19:05.656346  680422 system_pods.go:89] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Running
	I1213 12:19:05.656361  680422 retry.go:31] will retry after 732.541957ms: missing components: kube-proxy
	I1213 12:19:06.392484  680422 system_pods.go:86] 7 kube-system pods found
	I1213 12:19:06.392525  680422 system_pods.go:89] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:06.392534  680422 system_pods.go:89] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:06.392541  680422 system_pods.go:89] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:06.392549  680422 system_pods.go:89] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:06.392557  680422 system_pods.go:89] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 12:19:06.392567  680422 system_pods.go:89] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running
	I1213 12:19:06.392572  680422 system_pods.go:89] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Running
	I1213 12:19:06.392586  680422 retry.go:31] will retry after 837.396164ms: missing components: kube-proxy
	I1213 12:19:07.233814  680422 system_pods.go:86] 7 kube-system pods found
	I1213 12:19:07.233850  680422 system_pods.go:89] "coredns-66bc5c9577-whs52" [a66d116e-76d3-46d9-ac37-f8db0c6aabc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 12:19:07.233860  680422 system_pods.go:89] "etcd-bridge-062409" [5a727692-9ab9-415e-bb79-56ba70a20d30] Running
	I1213 12:19:07.233866  680422 system_pods.go:89] "kube-apiserver-bridge-062409" [ba445720-68ca-431c-ab76-bbf985200486] Running
	I1213 12:19:07.233878  680422 system_pods.go:89] "kube-controller-manager-bridge-062409" [f44c3b2e-4410-4747-8787-e1394603bcde] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 12:19:07.233883  680422 system_pods.go:89] "kube-proxy-tjxlm" [8ace75ad-9c3f-452b-8992-8f97b7edcfed] Running
	I1213 12:19:07.233888  680422 system_pods.go:89] "kube-scheduler-bridge-062409" [104d8c42-ae3c-46da-9db2-8bcde87e2bd7] Running
	I1213 12:19:07.233893  680422 system_pods.go:89] "storage-provisioner" [c2c0ddf6-ab60-4c55-b648-2cfb2d3ab749] Running
	I1213 12:19:07.233901  680422 system_pods.go:126] duration metric: took 3.122303099s to wait for k8s-apps to be running ...
	I1213 12:19:07.233914  680422 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 12:19:07.233970  680422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 12:19:07.246642  680422 system_svc.go:56] duration metric: took 12.717925ms WaitForService to wait for kubelet
	I1213 12:19:07.246668  680422 kubeadm.go:587] duration metric: took 4.395929833s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 12:19:07.246687  680422 node_conditions.go:102] verifying NodePressure condition ...
	I1213 12:19:07.249513  680422 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 12:19:07.249543  680422 node_conditions.go:123] node cpu capacity is 2
	I1213 12:19:07.249559  680422 node_conditions.go:105] duration metric: took 2.867695ms to run NodePressure ...
	I1213 12:19:07.249571  680422 start.go:242] waiting for startup goroutines ...
	I1213 12:19:07.249579  680422 start.go:247] waiting for cluster config update ...
	I1213 12:19:07.249595  680422 start.go:256] writing updated cluster config ...
	I1213 12:19:07.249876  680422 ssh_runner.go:195] Run: rm -f paused
	I1213 12:19:07.253315  680422 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 12:19:07.257197  680422 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-whs52" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 12:19:09.262070  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:11.262232  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:13.762923  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:16.262660  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:18.263395  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:20.762688  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:23.262130  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:25.262238  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:27.263001  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:29.761798  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:31.761996  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:33.762422  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:36.262259  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:38.763703  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:41.263031  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	W1213 12:19:43.763204  680422 pod_ready.go:104] pod "coredns-66bc5c9577-whs52" is not "Ready", error: <nil>
	I1213 12:19:44.262676  680422 pod_ready.go:94] pod "coredns-66bc5c9577-whs52" is "Ready"
	I1213 12:19:44.262703  680422 pod_ready.go:86] duration metric: took 37.005480787s for pod "coredns-66bc5c9577-whs52" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:44.265635  680422 pod_ready.go:83] waiting for pod "etcd-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:44.271771  680422 pod_ready.go:94] pod "etcd-bridge-062409" is "Ready"
	I1213 12:19:44.271804  680422 pod_ready.go:86] duration metric: took 6.137995ms for pod "etcd-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:44.274356  680422 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:44.278779  680422 pod_ready.go:94] pod "kube-apiserver-bridge-062409" is "Ready"
	I1213 12:19:44.278805  680422 pod_ready.go:86] duration metric: took 4.42274ms for pod "kube-apiserver-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:44.281088  680422 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:44.465435  680422 pod_ready.go:94] pod "kube-controller-manager-bridge-062409" is "Ready"
	I1213 12:19:44.465461  680422 pod_ready.go:86] duration metric: took 184.344755ms for pod "kube-controller-manager-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:44.661559  680422 pod_ready.go:83] waiting for pod "kube-proxy-tjxlm" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:45.062143  680422 pod_ready.go:94] pod "kube-proxy-tjxlm" is "Ready"
	I1213 12:19:45.062183  680422 pod_ready.go:86] duration metric: took 400.597938ms for pod "kube-proxy-tjxlm" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:45.262477  680422 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:45.660484  680422 pod_ready.go:94] pod "kube-scheduler-bridge-062409" is "Ready"
	I1213 12:19:45.660515  680422 pod_ready.go:86] duration metric: took 398.009014ms for pod "kube-scheduler-bridge-062409" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 12:19:45.660527  680422 pod_ready.go:40] duration metric: took 38.407178053s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 12:19:45.719598  680422 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1213 12:19:45.722870  680422 out.go:179] * Done! kubectl is now configured to use "bridge-062409" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522775611Z" level=info msg="Using the internal default seccomp profile"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522787123Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522793498Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522799176Z" level=info msg="RDT not available in the host system"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.522824374Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523715533Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523753753Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.523772756Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.524439847Z" level=info msg="Conmon does support the --sync option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.524461181Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.52461968Z" level=info msg="Updated default CNI network name to "
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.525403671Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci
/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_m
emory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_di
r = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [cr
io.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.529256513Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.529355665Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576580003Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576617025Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576659233Z" level=info msg="Create NRI interface"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576753674Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576762569Z" level=info msg="runtime interface created"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576773646Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576779767Z" level=info msg="runtime interface starting up..."
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576785831Z" level=info msg="starting plugins..."
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576798393Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 12:03:09 no-preload-307409 crio[616]: time="2025-12-13T12:03:09.576863575Z" level=info msg="No systemd watchdog enabled"
	Dec 13 12:03:09 no-preload-307409 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1213 12:22:23.391062   10139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:22:23.391548   10139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:22:23.393379   10139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:22:23.393834   10139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1213 12:22:23.395482   10139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +43.810032] overlayfs: idmapped layers are currently not supported
	[Dec13 11:27] overlayfs: idmapped layers are currently not supported
	[Dec13 11:28] overlayfs: idmapped layers are currently not supported
	[Dec13 11:29] overlayfs: idmapped layers are currently not supported
	[Dec13 11:31] overlayfs: idmapped layers are currently not supported
	[Dec13 11:33] overlayfs: idmapped layers are currently not supported
	[Dec13 11:43] overlayfs: idmapped layers are currently not supported
	[Dec13 11:45] overlayfs: idmapped layers are currently not supported
	[Dec13 11:46] overlayfs: idmapped layers are currently not supported
	[ +24.639766] overlayfs: idmapped layers are currently not supported
	[ +18.732422] overlayfs: idmapped layers are currently not supported
	[Dec13 11:47] overlayfs: idmapped layers are currently not supported
	[Dec13 11:48] overlayfs: idmapped layers are currently not supported
	[Dec13 11:50] overlayfs: idmapped layers are currently not supported
	[ +24.618483] overlayfs: idmapped layers are currently not supported
	[Dec13 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.749488] overlayfs: idmapped layers are currently not supported
	[Dec13 11:52] overlayfs: idmapped layers are currently not supported
	[Dec13 12:09] overlayfs: idmapped layers are currently not supported
	[Dec13 12:11] overlayfs: idmapped layers are currently not supported
	[Dec13 12:12] overlayfs: idmapped layers are currently not supported
	[Dec13 12:14] overlayfs: idmapped layers are currently not supported
	[Dec13 12:15] overlayfs: idmapped layers are currently not supported
	[Dec13 12:17] overlayfs: idmapped layers are currently not supported
	[Dec13 12:18] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 12:22:23 up  4:04,  0 user,  load average: 0.18, 0.91, 1.22
	Linux no-preload-307409 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 13 12:22:20 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:22:21 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1533.
	Dec 13 12:22:21 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:22:21 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:22:21 no-preload-307409 kubelet[10004]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:22:21 no-preload-307409 kubelet[10004]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:22:21 no-preload-307409 kubelet[10004]: E1213 12:22:21.566939   10004 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:22:21 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:22:21 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:22:22 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1534.
	Dec 13 12:22:22 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:22:22 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:22:22 no-preload-307409 kubelet[10011]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:22:22 no-preload-307409 kubelet[10011]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:22:22 no-preload-307409 kubelet[10011]: E1213 12:22:22.322943   10011 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:22:22 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:22:22 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 13 12:22:23 no-preload-307409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1535.
	Dec 13 12:22:23 no-preload-307409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:22:23 no-preload-307409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 12:22:23 no-preload-307409 kubelet[10051]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:22:23 no-preload-307409 kubelet[10051]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 13 12:22:23 no-preload-307409 kubelet[10051]: E1213 12:22:23.096491   10051 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 13 12:22:23 no-preload-307409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 13 12:22:23 no-preload-307409 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-307409 -n no-preload-307409: exit status 2 (348.314293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-307409" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (242.94s)
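The kubelet entries captured above point at the underlying failure: on this host the v1.35.0-beta.0 kubelet exits during configuration validation because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the kubelet restarts in a loop (restart counter 1533-1535), the apiserver on localhost:8443 stays unreachable (the connection-refused errors in the describe-nodes output), and the addon check cannot complete. A minimal diagnostic sketch, not part of the captured test output, for confirming the host's cgroup mode from a shell on the node (for example via minikube ssh -p no-preload-307409):

	stat -fc %T /sys/fs/cgroup/            # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on cgroup v1
	ls /sys/fs/cgroup/cgroup.controllers   # present only when the unified cgroup v2 hierarchy is mounted
	journalctl -u kubelet --no-pager | tail -n 20   # shows the same validation error on each restart

If both checks indicate cgroup v1, the failure is environmental (host cgroup mode) rather than a regression in the addon under test.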

                                                
                                    

Test pass (318/412)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 3.23
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.21
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.45
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.63
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 132.76
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 10.84
57 TestAddons/StoppedEnableDisable 12.43
58 TestCertOptions 41.97
59 TestCertExpiration 244.86
61 TestForceSystemdFlag 38.02
62 TestForceSystemdEnv 37.98
67 TestErrorSpam/setup 32.43
68 TestErrorSpam/start 0.82
69 TestErrorSpam/status 1.13
70 TestErrorSpam/pause 7.11
71 TestErrorSpam/unpause 5.91
72 TestErrorSpam/stop 1.51
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 53.85
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 28.9
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
84 TestFunctional/serial/CacheCmd/cache/add_local 1.2
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 39.78
93 TestFunctional/serial/ComponentHealth 0.09
94 TestFunctional/serial/LogsCmd 1.44
95 TestFunctional/serial/LogsFileCmd 1.46
96 TestFunctional/serial/InvalidService 4.39
98 TestFunctional/parallel/ConfigCmd 0.49
99 TestFunctional/parallel/DashboardCmd 14.44
100 TestFunctional/parallel/DryRun 0.43
101 TestFunctional/parallel/InternationalLanguage 0.2
102 TestFunctional/parallel/StatusCmd 1.26
106 TestFunctional/parallel/ServiceCmdConnect 7.62
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 19.66
110 TestFunctional/parallel/SSHCmd 0.58
111 TestFunctional/parallel/CpCmd 2.06
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 2.15
118 TestFunctional/parallel/NodeLabels 0.11
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.78
122 TestFunctional/parallel/License 0.31
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 1.06
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.58
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.11
130 TestFunctional/parallel/ImageCommands/Setup 0.7
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.02
136 TestFunctional/parallel/ServiceCmd/DeployApp 8.25
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.31
147 TestFunctional/parallel/ServiceCmd/List 0.34
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
150 TestFunctional/parallel/ServiceCmd/Format 0.36
151 TestFunctional/parallel/ServiceCmd/URL 0.4
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
159 TestFunctional/parallel/ProfileCmd/profile_list 0.46
160 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
161 TestFunctional/parallel/MountCmd/any-port 8.3
162 TestFunctional/parallel/MountCmd/specific-port 2.55
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.68
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.5
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.81
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.97
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.96
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.67
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.25
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.73
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.56
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.3
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.43
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.37
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.67
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.2
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.78
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.27
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.21
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.82
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.38
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.75
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.41
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 152.57
265 TestMultiControlPlane/serial/DeployApp 7.47
266 TestMultiControlPlane/serial/PingHostFromPods 1.56
267 TestMultiControlPlane/serial/AddWorkerNode 35.75
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
270 TestMultiControlPlane/serial/CopyFile 19.99
271 TestMultiControlPlane/serial/StopSecondaryNode 12.91
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
273 TestMultiControlPlane/serial/RestartSecondaryNode 30.35
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.24
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 128.16
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.45
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.83
278 TestMultiControlPlane/serial/StopCluster 36.09
279 TestMultiControlPlane/serial/RestartCluster 68.78
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
281 TestMultiControlPlane/serial/AddSecondaryNode 52.99
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
287 TestJSONOutput/start/Command 50.94
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.85
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 40.23
313 TestKicCustomNetwork/use_default_bridge_network 37.1
314 TestKicExistingNetwork 32.98
315 TestKicCustomSubnet 37.78
316 TestKicStaticIP 35.88
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 75.76
321 TestMountStart/serial/StartWithMountFirst 8.92
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.71
324 TestMountStart/serial/VerifyMountSecond 0.29
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.27
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 8.01
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 81.26
333 TestMultiNode/serial/DeployApp2Nodes 4.98
334 TestMultiNode/serial/PingHostFrom2Pods 0.95
335 TestMultiNode/serial/AddNode 29.33
336 TestMultiNode/serial/MultiNodeLabels 0.08
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.63
339 TestMultiNode/serial/StopNode 2.64
340 TestMultiNode/serial/StartAfterStop 8.33
341 TestMultiNode/serial/RestartKeepsNodes 73.76
342 TestMultiNode/serial/DeleteNode 5.65
343 TestMultiNode/serial/StopMultiNode 23.98
344 TestMultiNode/serial/RestartMultiNode 52.77
345 TestMultiNode/serial/ValidateNameConflict 37.66
350 TestPreload 122.52
352 TestScheduledStopUnix 103.75
355 TestInsufficientStorage 12.84
356 TestRunningBinaryUpgrade 301.43
359 TestMissingContainerUpgrade 110.63
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 44.06
363 TestNoKubernetes/serial/StartWithStopK8s 31.41
364 TestNoKubernetes/serial/Start 11.45
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.51
367 TestNoKubernetes/serial/ProfileList 1.66
368 TestNoKubernetes/serial/Stop 1.38
369 TestNoKubernetes/serial/StartNoArgs 7.69
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
371 TestStoppedBinaryUpgrade/Setup 0.85
372 TestStoppedBinaryUpgrade/Upgrade 307.84
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.94
382 TestPause/serial/Start 57.3
383 TestPause/serial/SecondStartNoReconfiguration 29.65
392 TestNetworkPlugins/group/false 3.61
397 TestStartStop/group/old-k8s-version/serial/FirstStart 62.39
398 TestStartStop/group/old-k8s-version/serial/DeployApp 9.42
400 TestStartStop/group/old-k8s-version/serial/Stop 12.01
401 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
402 TestStartStop/group/old-k8s-version/serial/SecondStart 54.84
403 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
404 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
405 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
408 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.94
410 TestStartStop/group/embed-certs/serial/FirstStart 56.42
411 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.41
413 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.07
414 TestStartStop/group/embed-certs/serial/DeployApp 9.41
415 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
416 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.97
418 TestStartStop/group/embed-certs/serial/Stop 12.94
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
420 TestStartStop/group/embed-certs/serial/SecondStart 48.51
421 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
423 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
425 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
428 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
429 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
435 TestStartStop/group/newest-cni/serial/DeployApp 0
437 TestStartStop/group/newest-cni/serial/Stop 1.29
438 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
440 TestStartStop/group/no-preload/serial/Stop 1.31
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
444 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
445 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
448 TestNetworkPlugins/group/auto/Start 53.17
449 TestNetworkPlugins/group/auto/KubeletFlags 0.33
450 TestNetworkPlugins/group/auto/NetCatPod 10.26
451 TestNetworkPlugins/group/auto/DNS 0.15
452 TestNetworkPlugins/group/auto/Localhost 0.12
453 TestNetworkPlugins/group/auto/HairPin 0.14
454 TestNetworkPlugins/group/kindnet/Start 51.62
455 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
456 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
457 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
458 TestNetworkPlugins/group/kindnet/DNS 0.15
459 TestNetworkPlugins/group/kindnet/Localhost 0.14
460 TestNetworkPlugins/group/kindnet/HairPin 0.15
461 TestNetworkPlugins/group/calico/Start 66.56
462 TestNetworkPlugins/group/calico/ControllerPod 6
463 TestNetworkPlugins/group/calico/KubeletFlags 0.3
464 TestNetworkPlugins/group/calico/NetCatPod 9.29
465 TestNetworkPlugins/group/calico/DNS 0.15
466 TestNetworkPlugins/group/calico/Localhost 0.12
467 TestNetworkPlugins/group/calico/HairPin 0.14
468 TestNetworkPlugins/group/custom-flannel/Start 54.92
469 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
470 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
471 TestNetworkPlugins/group/custom-flannel/DNS 0.15
472 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
473 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
474 TestNetworkPlugins/group/enable-default-cni/Start 54.43
475 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
476 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
477 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
478 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
479 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
480 TestNetworkPlugins/group/flannel/Start 56.75
481 TestNetworkPlugins/group/flannel/ControllerPod 6.01
482 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
483 TestNetworkPlugins/group/flannel/NetCatPod 11.26
484 TestNetworkPlugins/group/flannel/DNS 0.15
485 TestNetworkPlugins/group/flannel/Localhost 0.13
486 TestNetworkPlugins/group/flannel/HairPin 0.2
488 TestNetworkPlugins/group/bridge/Start 77.12
489 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
490 TestNetworkPlugins/group/bridge/NetCatPod 10.27
491 TestNetworkPlugins/group/bridge/DNS 0.17
492 TestNetworkPlugins/group/bridge/Localhost 0.12
493 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (5.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-228427 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-228427 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.560022406s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.56s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 10:25:04.756922  356328 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 10:25:04.757007  356328 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
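preload-exists only asserts that the tarball reported above is already present in the local minikube cache. A rough manual equivalent, using the exact path printed in the log (under a different MINIKUBE_HOME the cache lives elsewhere):

	# sketch of the same check the test performs; path copied from the log above
	ls -lh /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4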

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-228427
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-228427: exit status 85 (86.713325ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-228427 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-228427 │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:24:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:24:59.237761  356334 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:24:59.237978  356334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:24:59.238007  356334 out.go:374] Setting ErrFile to fd 2...
	I1213 10:24:59.238027  356334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:24:59.238423  356334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	W1213 10:24:59.238659  356334 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22127-354468/.minikube/config/config.json: open /home/jenkins/minikube-integration/22127-354468/.minikube/config/config.json: no such file or directory
	I1213 10:24:59.239190  356334 out.go:368] Setting JSON to true
	I1213 10:24:59.240642  356334 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7652,"bootTime":1765613848,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:24:59.240739  356334 start.go:143] virtualization:  
	I1213 10:24:59.246191  356334 out.go:99] [download-only-228427] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1213 10:24:59.246551  356334 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 10:24:59.246638  356334 notify.go:221] Checking for updates...
	I1213 10:24:59.249997  356334 out.go:171] MINIKUBE_LOCATION=22127
	I1213 10:24:59.253602  356334 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:24:59.257016  356334 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:24:59.260449  356334 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:24:59.263687  356334 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:24:59.269882  356334 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:24:59.270245  356334 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:24:59.294270  356334 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:24:59.294375  356334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:24:59.349486  356334 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 10:24:59.339635273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:24:59.349597  356334 docker.go:319] overlay module found
	I1213 10:24:59.352897  356334 out.go:99] Using the docker driver based on user configuration
	I1213 10:24:59.352943  356334 start.go:309] selected driver: docker
	I1213 10:24:59.352951  356334 start.go:927] validating driver "docker" against <nil>
	I1213 10:24:59.353057  356334 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:24:59.416042  356334 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-13 10:24:59.404927625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:24:59.416269  356334 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:24:59.416644  356334 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:24:59.416830  356334 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:24:59.420262  356334 out.go:171] Using Docker driver with root privileges
	I1213 10:24:59.423700  356334 cni.go:84] Creating CNI manager for ""
	I1213 10:24:59.423781  356334 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 10:24:59.423797  356334 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 10:24:59.423881  356334 start.go:353] cluster config:
	{Name:download-only-228427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-228427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:24:59.426915  356334 out.go:99] Starting "download-only-228427" primary control-plane node in "download-only-228427" cluster
	I1213 10:24:59.426940  356334 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 10:24:59.429828  356334 out.go:99] Pulling base image v0.0.48-1765275396-22083 ...
	I1213 10:24:59.429885  356334 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 10:24:59.429977  356334 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 10:24:59.446029  356334 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:24:59.446208  356334 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 10:24:59.446309  356334 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 10:24:59.480373  356334 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:24:59.480406  356334 cache.go:65] Caching tarball of preloaded images
	I1213 10:24:59.480573  356334 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 10:24:59.483997  356334 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1213 10:24:59.484029  356334 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1213 10:24:59.566029  356334 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1213 10:24:59.566158  356334 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1213 10:25:02.949953  356334 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1213 10:25:02.950359  356334 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/download-only-228427/config.json ...
	I1213 10:25:02.950399  356334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/download-only-228427/config.json: {Name:mk771565569eb1463c5205f86bd91489122de641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 10:25:02.950578  356334 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1213 10:25:02.950763  356334 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-228427 host does not exist
	  To start a cluster, run: "minikube start -p download-only-228427"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
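Note: the "Last Start" log above shows how the preload is fetched: preload.go asks the GCS API for the tarball's MD5, and download.go appends it to the URL as a ?checksum=md5:... query so the downloader can verify the file after fetching it. A rough manual equivalent of that verified download, assuming curl and md5sum are available on the host (the output filename below is arbitrary), would be:

    # URL and checksum copied from the log lines above
    URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
    curl -fSL -o preload.tar.lz4 "$URL"
    echo "e092595ade89dbfc477bd4cd6b9c633b  preload.tar.lz4" | md5sum -c -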
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-228427
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (3.23s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-130157 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-130157 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.232632745s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.23s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 10:25:08.434121  356328 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 10:25:08.434159  356328 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-130157
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-130157: exit status 85 (90.215238ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-228427 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-228427 │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-228427                                                                                                                                                   │ download-only-228427 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-130157 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-130157 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:25:05
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:25:05.245690  356528 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:25:05.245908  356528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:05.245938  356528 out.go:374] Setting ErrFile to fd 2...
	I1213 10:25:05.245959  356528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:05.246235  356528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:25:05.246689  356528 out.go:368] Setting JSON to true
	I1213 10:25:05.247552  356528 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7658,"bootTime":1765613848,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:25:05.247647  356528 start.go:143] virtualization:  
	I1213 10:25:05.251094  356528 out.go:99] [download-only-130157] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:25:05.251345  356528 notify.go:221] Checking for updates...
	I1213 10:25:05.254239  356528 out.go:171] MINIKUBE_LOCATION=22127
	I1213 10:25:05.257201  356528 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:25:05.260244  356528 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:25:05.263201  356528 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:25:05.266184  356528 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:25:05.271957  356528 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:25:05.272247  356528 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:25:05.298822  356528 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:25:05.298943  356528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:05.357123  356528 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-13 10:25:05.347780673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:05.357240  356528 docker.go:319] overlay module found
	I1213 10:25:05.360176  356528 out.go:99] Using the docker driver based on user configuration
	I1213 10:25:05.360205  356528 start.go:309] selected driver: docker
	I1213 10:25:05.360212  356528 start.go:927] validating driver "docker" against <nil>
	I1213 10:25:05.360316  356528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:05.420888  356528 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-13 10:25:05.412101888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:05.421033  356528 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:25:05.421326  356528 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:25:05.421479  356528 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:25:05.424525  356528 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-130157 host does not exist
	  To start a cluster, run: "minikube start -p download-only-130157"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.21s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-130157
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.45s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-963520 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-963520 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.448809022s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.45s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 10:25:12.318128  356328 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 10:25:12.318164  356328 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-963520
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-963520: exit status 85 (93.400381ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-228427 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-228427 │ jenkins │ v1.37.0 │ 13 Dec 25 10:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-228427                                                                                                                                                          │ download-only-228427 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-130157 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-130157 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-130157                                                                                                                                                          │ download-only-130157 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │ 13 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-963520 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-963520 │ jenkins │ v1.37.0 │ 13 Dec 25 10:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 10:25:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 10:25:08.911696  356724 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:25:08.911812  356724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:08.911823  356724 out.go:374] Setting ErrFile to fd 2...
	I1213 10:25:08.911829  356724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:25:08.912078  356724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:25:08.912489  356724 out.go:368] Setting JSON to true
	I1213 10:25:08.913282  356724 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7661,"bootTime":1765613848,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:25:08.913352  356724 start.go:143] virtualization:  
	I1213 10:25:08.916764  356724 out.go:99] [download-only-963520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:25:08.917045  356724 notify.go:221] Checking for updates...
	I1213 10:25:08.920711  356724 out.go:171] MINIKUBE_LOCATION=22127
	I1213 10:25:08.924183  356724 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:25:08.927118  356724 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:25:08.930033  356724 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:25:08.932891  356724 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 10:25:08.938820  356724 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 10:25:08.939129  356724 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:25:08.971931  356724 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:25:08.972060  356724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:09.036491  356724 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-13 10:25:09.026567109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:09.036594  356724 docker.go:319] overlay module found
	I1213 10:25:09.039653  356724 out.go:99] Using the docker driver based on user configuration
	I1213 10:25:09.039699  356724 start.go:309] selected driver: docker
	I1213 10:25:09.039708  356724 start.go:927] validating driver "docker" against <nil>
	I1213 10:25:09.039821  356724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:25:09.103598  356724 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-13 10:25:09.094195355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:25:09.103754  356724 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 10:25:09.104032  356724 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1213 10:25:09.104194  356724 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 10:25:09.107330  356724 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-963520 host does not exist
	  To start a cluster, run: "minikube start -p download-only-963520"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-963520
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1213 10:25:13.577829  356328 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-392613 --alsologtostderr --binary-mirror http://127.0.0.1:41447 --driver=docker  --container-runtime=crio
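Note: the start command above points minikube at a local binary mirror on 127.0.0.1:41447 (presumably stood up by the test harness itself) so that kubectl is fetched from there instead of dl.k8s.io. To poke at --binary-mirror by hand, one could serve a directory over HTTP and point the flag at it; the port, profile name, and directory below are made up for illustration, and the directory would have to mirror the path layout minikube requests (the kubectl URL in the line above shows the shape of that path):

    # hypothetical local mirror for manual experimentation
    python3 -m http.server 41447 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-manual --binary-mirror http://127.0.0.1:41447 --driver=docker --container-runtime=crio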
helpers_test.go:176: Cleaning up "binary-mirror-392613" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-392613
--- PASS: TestBinaryMirror (0.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-543946
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-543946: exit status 85 (80.169308ms)

-- stdout --
	* Profile "addons-543946" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-543946"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-543946
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-543946: exit status 85 (76.863336ms)

-- stdout --
	* Profile "addons-543946" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-543946"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (132.76s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-543946 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-543946 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.754886755s)
--- PASS: TestAddons/Setup (132.76s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-543946 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-543946 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (10.84s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-543946 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-543946 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c4936020-682f-4f78-8ab7-70b9e9cd5ae0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c4936020-682f-4f78-8ab7-70b9e9cd5ae0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003633858s
addons_test.go:696: (dbg) Run:  kubectl --context addons-543946 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-543946 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-543946 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-543946 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.84s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-543946
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-543946: (12.145669317s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-543946
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-543946
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-543946
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestCertOptions (41.97s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-522461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-522461 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.770036113s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-522461 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-522461 config view
E1213 11:47:00.470759  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-522461 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-522461" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-522461
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-522461: (2.147212153s)
--- PASS: TestCertOptions (41.97s)

TestCertExpiration (244.86s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-420007 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-420007 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.929138131s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-420007 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.189319226s)
helpers_test.go:176: Cleaning up "cert-expiration-420007" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-420007
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-420007: (2.737924584s)
--- PASS: TestCertExpiration (244.86s)

TestForceSystemdFlag (38.02s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-267216 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-267216 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.149130163s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-267216 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
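Note: the ssh above dumps CRI-O's generated drop-in config; the point of --force-systemd is that this file should select the systemd cgroup manager. The expected value below is an assumption based on CRI-O's cgroup_manager option, not something asserted anywhere in this log:

    # hypothetical spot-check, profile name taken from this run
    out/minikube-linux-arm64 -p force-systemd-flag-267216 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # expected if --force-systemd took effect: cgroup_manager = "systemd"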
helpers_test.go:176: Cleaning up "force-systemd-flag-267216" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-267216
E1213 11:45:29.716263  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-267216: (2.567395811s)
--- PASS: TestForceSystemdFlag (38.02s)

TestForceSystemdEnv (37.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-181508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-181508 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.20092635s)
helpers_test.go:176: Cleaning up "force-systemd-env-181508" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-181508
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-181508: (2.781147706s)
--- PASS: TestForceSystemdEnv (37.98s)

TestErrorSpam/setup (32.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-457060 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-457060 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-457060 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-457060 --driver=docker  --container-runtime=crio: (32.429459581s)
--- PASS: TestErrorSpam/setup (32.43s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (7.11s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause: exit status 80 (2.46540224s)

-- stdout --
	* Pausing node nospam-457060 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:31:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause" failed: exit status 80
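Note: every pause attempt in this test fails the same way: minikube runs the command quoted in the stderr block ("sudo runc list -f json" inside the node) and runc exits 1 because its state directory /run/runc does not exist. A minimal way to reproduce the failing check by hand while the profile is still up (profile name taken from this run; the crictl step is only a sanity check that the runtime itself is serving pods):

    # re-run the exact check that GUEST_PAUSE trips over
    out/minikube-linux-arm64 -p nospam-457060 ssh "sudo runc list -f json"
    # confirm CRI-O still reports running containers despite the missing runc state dir
    out/minikube-linux-arm64 -p nospam-457060 ssh "sudo crictl ps"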
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause: exit status 80 (2.474221813s)

-- stdout --
	* Pausing node nospam-457060 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:31:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause: exit status 80 (2.171806031s)

-- stdout --
	* Pausing node nospam-457060 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.11s)

                                                
                                    
TestErrorSpam/unpause (5.91s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause: exit status 80 (1.658213259s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-457060 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:31:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause: exit status 80 (2.118318157s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-457060 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:31:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause: exit status 80 (2.127073337s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-457060 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T10:31:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.91s)
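Both the pause and unpause runs above fail the same way: the "sudo runc list -f json" call that minikube shells out to cannot find the runc state directory. A minimal reproduction sketch, assuming the nospam-457060 profile is still running and that the cri-o runtime is not keeping runc state under /run/runc (command and error text taken verbatim from the stderr above):

	# run the same container listing that the pause/unpause path executes
	minikube ssh -p nospam-457060 -- sudo runc list -f json
	# expected: exit status 1 with "open /run/runc: no such file or directory"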

                                                
                                    
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 stop: (1.309879199s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-457060 --log_dir /tmp/nospam-457060 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (53.85s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371413 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1213 10:32:27.932622  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:27.939219  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:27.950645  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:27.972071  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:28.013686  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:28.095110  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:28.256627  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:28.578368  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:29.220397  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:30.502034  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:33.063862  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-371413 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (53.846669679s)
--- PASS: TestFunctional/serial/StartWithProxy (53.85s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.9s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1213 10:32:38.127908  356328 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371413 --alsologtostderr -v=8
E1213 10:32:38.187027  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:32:48.429258  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-371413 --alsologtostderr -v=8: (28.896703327s)
functional_test.go:678: soft start took 28.899063558s for "functional-371413" cluster.
I1213 10:33:07.024981  356328 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (28.90s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-371413 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 cache add registry.k8s.io/pause:3.1: (1.222231227s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cache add registry.k8s.io/pause:3.3
E1213 10:33:08.911096  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 cache add registry.k8s.io/pause:3.3: (1.171743054s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 cache add registry.k8s.io/pause:latest: (1.118632599s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-371413 /tmp/TestFunctionalserialCacheCmdcacheadd_local2313183702/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cache add minikube-local-cache-test:functional-371413
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cache delete minikube-local-cache-test:functional-371413
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-371413
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.908373ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 kubectl -- --context functional-371413 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-371413 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.78s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371413 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 10:33:49.872833  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-371413 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.781127732s)
functional_test.go:776: restart took 39.78121817s for "functional-371413" cluster.
I1213 10:33:54.325056  356328 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (39.78s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-371413 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 logs: (1.441913051s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 logs --file /tmp/TestFunctionalserialLogsFileCmd2588158036/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 logs --file /tmp/TestFunctionalserialLogsFileCmd2588158036/001/logs.txt: (1.460404502s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-371413 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-371413
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-371413: exit status 115 (388.421153ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32081 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-371413 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
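The SVC_UNREACHABLE exit above is the expected outcome of this test: testdata/invalidsvc.yaml creates a service with no running pod behind it, so "minikube service" correctly refuses to open it. A quick manual check, sketched with the context and service names from the log (run before the delete step removes the manifest):

	# the service exists but has no ready endpoints
	kubectl --context functional-371413 get endpoints invalid-svc
	kubectl --context functional-371413 describe service invalid-svc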

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 config get cpus: exit status 14 (80.050366ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 config get cpus: exit status 14 (70.90904ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-371413 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-371413 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 382633: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.44s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371413 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-371413 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.116376ms)

                                                
                                                
-- stdout --
	* [functional-371413] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:34:36.207762  382196 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:34:36.207882  382196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:34:36.207893  382196 out.go:374] Setting ErrFile to fd 2...
	I1213 10:34:36.207899  382196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:34:36.208152  382196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:34:36.208502  382196 out.go:368] Setting JSON to false
	I1213 10:34:36.209475  382196 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8229,"bootTime":1765613848,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:34:36.209551  382196 start.go:143] virtualization:  
	I1213 10:34:36.213205  382196 out.go:179] * [functional-371413] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 10:34:36.217097  382196 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:34:36.217224  382196 notify.go:221] Checking for updates...
	I1213 10:34:36.222839  382196 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:34:36.225719  382196 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:34:36.228655  382196 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:34:36.231480  382196 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:34:36.234599  382196 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:34:36.238202  382196 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:34:36.238766  382196 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:34:36.271351  382196 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:34:36.271503  382196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:34:36.334815  382196 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 10:34:36.325836566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:34:36.334928  382196 docker.go:319] overlay module found
	I1213 10:34:36.337961  382196 out.go:179] * Using the docker driver based on existing profile
	I1213 10:34:36.340792  382196 start.go:309] selected driver: docker
	I1213 10:34:36.340808  382196 start.go:927] validating driver "docker" against &{Name:functional-371413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-371413 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:34:36.340922  382196 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:34:36.344427  382196 out.go:203] 
	W1213 10:34:36.347422  382196 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 10:34:36.350329  382196 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371413 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
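The exit status 23 here is intentional: the dry run asks for 250MB precisely to trigger the RSRC_INSUFFICIENT_REQ_MEMORY guard. As a sketch only, the same dry run should get past that memory check once the request clears the 1800MB minimum quoted in the error (2048mb below is an illustrative value, not taken from the test):

	# same dry run with a memory request above the reported minimum
	out/minikube-linux-arm64 start -p functional-371413 --dry-run --memory 2048mb --alsologtostderr --driver=docker --container-runtime=crio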

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-371413 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-371413 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.082794ms)

                                                
                                                
-- stdout --
	* [functional-371413] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 10:34:36.646301  382315 out.go:360] Setting OutFile to fd 1 ...
	I1213 10:34:36.646482  382315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:34:36.646494  382315 out.go:374] Setting ErrFile to fd 2...
	I1213 10:34:36.646500  382315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 10:34:36.648584  382315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 10:34:36.649091  382315 out.go:368] Setting JSON to false
	I1213 10:34:36.649981  382315 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8229,"bootTime":1765613848,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 10:34:36.650059  382315 start.go:143] virtualization:  
	I1213 10:34:36.653278  382315 out.go:179] * [functional-371413] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 10:34:36.657086  382315 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 10:34:36.657225  382315 notify.go:221] Checking for updates...
	I1213 10:34:36.662851  382315 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 10:34:36.665713  382315 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 10:34:36.669670  382315 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 10:34:36.672589  382315 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 10:34:36.675306  382315 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 10:34:36.678559  382315 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 10:34:36.679171  382315 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 10:34:36.709170  382315 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 10:34:36.709286  382315 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 10:34:36.775846  382315 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-13 10:34:36.766287349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 10:34:36.775951  382315 docker.go:319] overlay module found
	I1213 10:34:36.779102  382315 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 10:34:36.781813  382315 start.go:309] selected driver: docker
	I1213 10:34:36.781854  382315 start.go:927] validating driver "docker" against &{Name:functional-371413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-371413 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 10:34:36.781955  382315 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 10:34:36.785359  382315 out.go:203] 
	W1213 10:34:36.788168  382315 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 10:34:36.791113  382315 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-371413 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-371413 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-tv4kc" [2704c254-3193-43c2-818c-8e2f9bb3e813] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-tv4kc" [2704c254-3193-43c2-818c-8e2f9bb3e813] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003534767s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31575
functional_test.go:1680: http://192.168.49.2:31575: success! body:
Request served by hello-node-connect-7d85dfc575-tv4kc

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31575
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.62s)
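The endpoint URL resolved by "minikube service hello-node-connect --url" is specific to this run. As a sketch, the same success body the test validated can be fetched directly from the NodePort while the deployment is still up (URL taken from the log above; it will differ on other runs):

	# fetch the echo-server response the test checked
	curl -s http://192.168.49.2:31575/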

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (19.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [72b190c3-42b5-4d26-bb99-b78e137c750f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006674526s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-371413 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-371413 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-371413 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-371413 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [60ff00f8-e1c1-4025-88f1-5250688071c4] Pending
helpers_test.go:353: "sp-pod" [60ff00f8-e1c1-4025-88f1-5250688071c4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004282569s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-371413 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-371413 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-371413 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [536935af-9a5a-48db-b54c-65ea6bed6d4d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [536935af-9a5a-48db-b54c-65ea6bed6d4d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002975887s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-371413 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.66s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh -n functional-371413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cp functional-371413:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1545859346/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh -n functional-371413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh -n functional-371413 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)

TestFunctional/parallel/FileSync (0.36s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/356328/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo cat /etc/test/nested/copy/356328/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.15s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/356328.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo cat /etc/ssl/certs/356328.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/356328.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo cat /usr/share/ca-certificates/356328.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3563282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo cat /etc/ssl/certs/3563282.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3563282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo cat /usr/share/ca-certificates/3563282.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)

TestFunctional/parallel/NodeLabels (0.11s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-371413 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh "sudo systemctl is-active docker": exit status 1 (377.794896ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh "sudo systemctl is-active containerd": exit status 1 (404.229736ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.78s)
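Note on the non-zero exits above: they are the expected outcome for this test. systemctl is-active prints the unit state and exits non-zero when the unit is not active (status 3 in the SSH sessions above), so "inactive" together with a failing exit code is what confirms docker and containerd are disabled while crio is the active runtime. A minimal manual check, using the same commands as the log:
    out/minikube-linux-arm64 -p functional-371413 ssh "sudo systemctl is-active docker"      # prints "inactive", exits non-zero
    out/minikube-linux-arm64 -p functional-371413 ssh "sudo systemctl is-active containerd"  # prints "inactive", exits non-zero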

TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.06s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 version -o=json --components: (1.059942447s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371413 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-371413
localhost/kicbase/echo-server:functional-371413
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371413 image ls --format short --alsologtostderr:
I1213 10:34:46.084632  383897 out.go:360] Setting OutFile to fd 1 ...
I1213 10:34:46.084850  383897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:46.084873  383897 out.go:374] Setting ErrFile to fd 2...
I1213 10:34:46.084891  383897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:46.085180  383897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 10:34:46.085793  383897 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:46.085961  383897 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:46.086519  383897 cli_runner.go:164] Run: docker container inspect functional-371413 --format={{.State.Status}}
I1213 10:34:46.109782  383897 ssh_runner.go:195] Run: systemctl --version
I1213 10:34:46.109851  383897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371413
I1213 10:34:46.143236  383897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-371413/id_rsa Username:docker}
I1213 10:34:46.258392  383897 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.58s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371413 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest                                │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2                               │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0                               │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2                               │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2                               │ 1b34917560f09 │ 72.6MB │
│ docker.io/kicbase/echo-server           │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-371413                     │ ce2d2cda2d858 │ 4.79MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-371413                     │ ad2d431acb0d3 │ 1.64MB │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1                               │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ localhost/minikube-local-cache-test     │ functional-371413                     │ 8ae3927b0d929 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.2                               │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/pause                   │ latest                                │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371413 image ls --format table --alsologtostderr:
I1213 10:34:51.301385  384391 out.go:360] Setting OutFile to fd 1 ...
I1213 10:34:51.301567  384391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:51.301594  384391 out.go:374] Setting ErrFile to fd 2...
I1213 10:34:51.301614  384391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:51.301917  384391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 10:34:51.302588  384391 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:51.302759  384391 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:51.303323  384391 cli_runner.go:164] Run: docker container inspect functional-371413 --format={{.State.Status}}
I1213 10:34:51.320599  384391 ssh_runner.go:195] Run: systemctl --version
I1213 10:34:51.320666  384391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371413
I1213 10:34:51.338623  384391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-371413/id_rsa Username:docker}
I1213 10:34:51.442412  384391 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls --format json --alsologtostderr
2025/12/13 10:34:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371413 image ls --format json --alsologtostderr:
[{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65
f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"ad2d431acb0d3630455055089fe3af09b05e208569598b94f21fadafc509a960","repoDigests":["localhost/my-image@sha256:c2e61fe3ba2b173e218df9a1741ad6815e0fca71ea62da5ed0e78ab93e36c252"],"repoTags":["localhost/my-image:functional-371413"],"size":"1640791"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size
":"55077248"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"3e76e41439da844ef5fd7cecd1eedb3dc90a96864d2616c0ef166d24d860da1b","repoDigests":["docker.io/library/b972daf9ea3d1ea5082f97b775326c7ad78a5ad0a5b30682e252152d02a86fa7-tmp@sha256:ad1f8c03527b2cd3078b1c89e79bb9a99ce3ccdde2d72c17aa583b2812840c5c"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/bus
ybox:latest"],"size":"1634527"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id"
:"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","
repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8ae3927b0d9294a77fef92f0725f51a8c81d6d3a1fd96e23258fed48689ebad5","repoDigests":["localhost/minikube-local-cache-test@sha256:147e820888449616cacd7db7264ea8e9dca9ae227bd736971c4fa6a0fa914690"],"repoTags":["localhost/minikube-local-cache-test:functional-371413"],"size":"3330"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc",
"repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@
sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-371413"],"size":"4788229"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371413 image ls --format json --alsologtostderr:
I1213 10:34:51.071260  384357 out.go:360] Setting OutFile to fd 1 ...
I1213 10:34:51.071389  384357 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:51.071400  384357 out.go:374] Setting ErrFile to fd 2...
I1213 10:34:51.071405  384357 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:51.071982  384357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 10:34:51.072619  384357 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:51.072737  384357 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:51.073261  384357 cli_runner.go:164] Run: docker container inspect functional-371413 --format={{.State.Status}}
I1213 10:34:51.090426  384357 ssh_runner.go:195] Run: systemctl --version
I1213 10:34:51.090486  384357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371413
I1213 10:34:51.108929  384357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-371413/id_rsa Username:docker}
I1213 10:34:51.218414  384357 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371413 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-371413
size: "4788229"
- id: 8ae3927b0d9294a77fef92f0725f51a8c81d6d3a1fd96e23258fed48689ebad5
repoDigests:
- localhost/minikube-local-cache-test@sha256:147e820888449616cacd7db7264ea8e9dca9ae227bd736971c4fa6a0fa914690
repoTags:
- localhost/minikube-local-cache-test:functional-371413
size: "3330"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371413 image ls --format yaml --alsologtostderr:
I1213 10:34:46.649007  383945 out.go:360] Setting OutFile to fd 1 ...
I1213 10:34:46.649219  383945 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:46.649250  383945 out.go:374] Setting ErrFile to fd 2...
I1213 10:34:46.649273  383945 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:46.649547  383945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 10:34:46.650196  383945 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:46.650359  383945 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:46.650929  383945 cli_runner.go:164] Run: docker container inspect functional-371413 --format={{.State.Status}}
I1213 10:34:46.689773  383945 ssh_runner.go:195] Run: systemctl --version
I1213 10:34:46.689835  383945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371413
I1213 10:34:46.716290  383945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-371413/id_rsa Username:docker}
I1213 10:34:46.830705  383945 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh pgrep buildkitd: exit status 1 (396.360236ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr: (3.477039962s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3e76e41439d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-371413
--> ad2d431acb0
Successfully tagged localhost/my-image:functional-371413
ad2d431acb0d3630455055089fe3af09b05e208569598b94f21fadafc509a960
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-371413 image build -t localhost/my-image:functional-371413 testdata/build --alsologtostderr:
I1213 10:34:47.355389  384059 out.go:360] Setting OutFile to fd 1 ...
I1213 10:34:47.356611  384059 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:47.356658  384059 out.go:374] Setting ErrFile to fd 2...
I1213 10:34:47.356678  384059 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:34:47.357004  384059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 10:34:47.357703  384059 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:47.359077  384059 config.go:182] Loaded profile config "functional-371413": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 10:34:47.359717  384059 cli_runner.go:164] Run: docker container inspect functional-371413 --format={{.State.Status}}
I1213 10:34:47.391548  384059 ssh_runner.go:195] Run: systemctl --version
I1213 10:34:47.391600  384059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-371413
I1213 10:34:47.410537  384059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-371413/id_rsa Username:docker}
I1213 10:34:47.515932  384059 build_images.go:162] Building image from path: /tmp/build.1145876426.tar
I1213 10:34:47.516047  384059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 10:34:47.524723  384059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1145876426.tar
I1213 10:34:47.529268  384059 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1145876426.tar: stat -c "%s %y" /var/lib/minikube/build/build.1145876426.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1145876426.tar': No such file or directory
I1213 10:34:47.529341  384059 ssh_runner.go:362] scp /tmp/build.1145876426.tar --> /var/lib/minikube/build/build.1145876426.tar (3072 bytes)
I1213 10:34:47.547483  384059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1145876426
I1213 10:34:47.555310  384059 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1145876426 -xf /var/lib/minikube/build/build.1145876426.tar
I1213 10:34:47.563601  384059 crio.go:315] Building image: /var/lib/minikube/build/build.1145876426
I1213 10:34:47.563714  384059 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-371413 /var/lib/minikube/build/build.1145876426 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1213 10:34:50.736276  384059 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-371413 /var/lib/minikube/build/build.1145876426 --cgroup-manager=cgroupfs: (3.172509632s)
I1213 10:34:50.736337  384059 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1145876426
I1213 10:34:50.744471  384059 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1145876426.tar
I1213 10:34:50.752208  384059 build_images.go:218] Built localhost/my-image:functional-371413 from /tmp/build.1145876426.tar
I1213 10:34:50.752240  384059 build_images.go:134] succeeded building to: functional-371413
I1213 10:34:50.752247  384059 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)
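The STEP 1/3 .. 3/3 lines above imply that testdata/build contains a three-line Dockerfile plus a content.txt payload. A sketch of an equivalent build context (a reconstruction under that assumption, not the verbatim repository files), built with the same invocation as the test:
    # assumed build context, inferred from the STEP lines above
    mkdir -p /tmp/buildctx
    printf 'test content\n' > /tmp/buildctx/content.txt    # payload contents are an assumption
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/buildctx/Dockerfile
    out/minikube-linux-arm64 -p functional-371413 image build -t localhost/my-image:functional-371413 /tmp/buildctx --alsologtostderr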

TestFunctional/parallel/ImageCommands/Setup (0.7s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-371413
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image load --daemon kicbase/echo-server:functional-371413 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-371413 image load --daemon kicbase/echo-server:functional-371413 --alsologtostderr: (1.244902892s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image load --daemon kicbase/echo-server:functional-371413 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.02s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-371413 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-371413 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-f8rbt" [4b738fe0-6217-4d0b-ab56-bd107362f754] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-f8rbt" [4b738fe0-6217-4d0b-ab56-bd107362f754] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003288921s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-371413
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image load --daemon kicbase/echo-server:functional-371413 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image save kicbase/echo-server:functional-371413 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image rm kicbase/echo-server:functional-371413 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-371413
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 image save --daemon kicbase/echo-server:functional-371413 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-371413
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-371413 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-371413 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-371413 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 380259: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-371413 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-371413 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.31s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-371413 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [6a57be99-8cbe-4938-b209-e688d35c18cb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [6a57be99-8cbe-4938-b209-e688d35c18cb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003963445s
I1213 10:34:20.098465  356328 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.31s)

TestFunctional/parallel/ServiceCmd/List (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 service list -o json
functional_test.go:1504: Took "371.43465ms" to run "out/minikube-linux-arm64 -p functional-371413 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30333
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30333
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
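
Note: the ServiceCmd subtests above exercise the output modes of the service command against the same hello-node service (NodePort 30333 on 192.168.49.2 in this run). A rough manual equivalent, using the same binary and profile:
  out/minikube-linux-arm64 -p functional-371413 service list                                  # table of exposed services
  out/minikube-linux-arm64 -p functional-371413 service list -o json                          # same listing as JSON
  out/minikube-linux-arm64 -p functional-371413 service --namespace=default --https --url hello-node   # e.g. https://192.168.49.2:30333
  out/minikube-linux-arm64 -p functional-371413 service hello-node --url --format='{{.IP}}'   # node IP only
  out/minikube-linux-arm64 -p functional-371413 service hello-node --url                      # e.g. http://192.168.49.2:30333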

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-371413 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.134.106 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
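
Note: the tunnel checks above amount to: start the tunnel, read the LoadBalancer ingress IP it assigns to nginx-svc, and fetch it. The curl call below is an illustrative manual check, not part of the harness, and the 10.98.134.106 address is specific to this run.
  out/minikube-linux-arm64 -p functional-371413 tunnel --alsologtostderr &
  kubectl --context functional-371413 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.98.134.106/    # should return the nginx welcome page while the tunnel is up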

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-371413 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "408.516672ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.264459ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "361.721587ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.760138ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
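
Note: the four profile listings above differ only in output format; the -l/--light variants come back in roughly 55-65ms here versus 320-410ms for the full listings, apparently because they skip the per-profile status probe.
  out/minikube-linux-arm64 profile list                   # full table, includes cluster status
  out/minikube-linux-arm64 profile list -l                # light listing, no status probe
  out/minikube-linux-arm64 profile list -o json           # full listing as JSON
  out/minikube-linux-arm64 profile list -o json --light   # light listing as JSON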

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdany-port3338218401/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765622070675820432" to /tmp/TestFunctionalparallelMountCmdany-port3338218401/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765622070675820432" to /tmp/TestFunctionalparallelMountCmdany-port3338218401/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765622070675820432" to /tmp/TestFunctionalparallelMountCmdany-port3338218401/001/test-1765622070675820432
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.69755ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 10:34:31.027801  356328 retry.go:31] will retry after 643.296926ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 10:34 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 10:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 10:34 test-1765622070675820432
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh cat /mount-9p/test-1765622070675820432
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-371413 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [76f2c53d-4b01-4aa8-821f-bc7973e122f6] Pending
helpers_test.go:353: "busybox-mount" [76f2c53d-4b01-4aa8-821f-bc7973e122f6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [76f2c53d-4b01-4aa8-821f-bc7973e122f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [76f2c53d-4b01-4aa8-821f-bc7973e122f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003550803s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-371413 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdany-port3338218401/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.30s)
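
Note: the any-port flow is: start a 9p mount from a host temp dir into the guest at /mount-9p, confirm it with findmnt (the first findmnt can run before the mount daemon is ready, hence the single retry above), exercise it from the busybox-mount pod, then unmount. A hand-run sketch with an illustrative host path:
  out/minikube-linux-arm64 mount -p functional-371413 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount
  out/minikube-linux-arm64 -p functional-371413 ssh -- ls -la /mount-9p                # inspect mounted files
  out/minikube-linux-arm64 -p functional-371413 ssh "sudo umount -f /mount-9p"         # clean up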

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdspecific-port1938572229/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (563.597174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 10:34:39.542750  356328 retry.go:31] will retry after 722.384985ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdspecific-port1938572229/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh "sudo umount -f /mount-9p": exit status 1 (405.87725ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-371413 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdspecific-port1938572229/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.55s)
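
Note: the specific-port variant is the same flow with the host-side 9p server pinned via --port 46464. The final forced umount failing with status 32 ("not mounted") is expected here, since the mount process had already been stopped.
  out/minikube-linux-arm64 mount -p functional-371413 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &   # host path illustrative
  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T /mount-9p | grep 9p"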

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T" /mount1: exit status 1 (1.048233937s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 10:34:42.589085  356328 retry.go:31] will retry after 393.824643ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-371413 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-371413 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3564307197/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.68s)
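
Note: VerifyCleanup mounts one host directory at three guest paths and then relies on the --kill flag to terminate every mount process belonging to the profile; the "unable to find parent, assuming dead" lines afterwards confirm the daemons are already gone. In outline (host path illustrative):
  out/minikube-linux-arm64 mount -p functional-371413 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-371413 /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-371413 /tmp/mount-src:/mount3 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-371413 ssh "findmnt -T" /mount1    # repeat for /mount2 and /mount3
  out/minikube-linux-arm64 mount -p functional-371413 --kill=true           # kill all mounts for this profile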

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-371413
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-371413
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-371413
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22127-354468/.minikube/files/etc/test/nested/copy/356328/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:3.1: (1.212765302s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:3.3: (1.152112023s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:latest: (1.136125982s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.50s)
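
Note: each cache add pulls the tag, stores it in minikube's local image cache and loads it into the node's CRI-O image store, which is why every invocation above takes roughly a second. The cached tags can be listed afterwards:
  out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:3.1
  out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:3.3
  out/minikube-linux-arm64 -p functional-407525 cache add registry.k8s.io/pause:latest
  out/minikube-linux-arm64 cache list    # show what is currently cached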

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach180125981/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cache add minikube-local-cache-test:functional-407525
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cache delete minikube-local-cache-test:functional-407525
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-407525
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.631774ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)
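
Note: the reload check deletes an image from the node's runtime with crictl, confirms it is gone (the first inspecti fails with "no such image"), then restores it from the local cache:
  out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed
  out/minikube-linux-arm64 -p functional-407525 cache reload                                            # push cached images back in
  out/minikube-linux-arm64 -p functional-407525 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again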

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2586355652/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 config get cpus: exit status 14 (72.995405ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 config get cpus: exit status 14 (63.123967ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
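
Note: the two "exit status 14" results above are the expected outcome, not failures: config get on a key that is not set exits 14 with "specified key could not be found in config". The round trip exercised is:
  out/minikube-linux-arm64 -p functional-407525 config unset cpus
  out/minikube-linux-arm64 -p functional-407525 config get cpus     # exit 14: key not set
  out/minikube-linux-arm64 -p functional-407525 config set cpus 2
  out/minikube-linux-arm64 -p functional-407525 config get cpus     # prints 2
  out/minikube-linux-arm64 -p functional-407525 config unset cpus   # back to unset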

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (211.53948ms)

                                                
                                                
-- stdout --
	* [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:03:59.061147  413657 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:03:59.061250  413657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.061256  413657 out.go:374] Setting ErrFile to fd 2...
	I1213 11:03:59.061262  413657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:59.061609  413657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:03:59.062022  413657 out.go:368] Setting JSON to false
	I1213 11:03:59.063799  413657 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9991,"bootTime":1765613848,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:03:59.063912  413657 start.go:143] virtualization:  
	I1213 11:03:59.069826  413657 out.go:179] * [functional-407525] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:03:59.073085  413657 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:03:59.073181  413657 notify.go:221] Checking for updates...
	I1213 11:03:59.077378  413657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:03:59.080320  413657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:03:59.083349  413657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:03:59.086859  413657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:03:59.089841  413657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:03:59.093322  413657 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:03:59.093913  413657 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:03:59.129970  413657 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:03:59.130153  413657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:59.185269  413657 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:59.175968787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:59.185373  413657 docker.go:319] overlay module found
	I1213 11:03:59.188466  413657 out.go:179] * Using the docker driver based on existing profile
	I1213 11:03:59.191351  413657 start.go:309] selected driver: docker
	I1213 11:03:59.191378  413657 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:59.191500  413657 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:03:59.195189  413657 out.go:203] 
	W1213 11:03:59.198031  413657 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 11:03:59.200863  413657 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-407525 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
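
Note: the "exit status 23" above is the point of the test: even with --dry-run, minikube validates the requested resources, and a 250MB memory request is below the 1800MB usable minimum, so it aborts with RSRC_INSUFFICIENT_REQ_MEMORY. The second invocation drops the memory override and passes validation.
  out/minikube-linux-arm64 start -p functional-407525 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0   # exits 23
  out/minikube-linux-arm64 start -p functional-407525 --dry-run --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                  # validates cleanly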

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-407525 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (192.627913ms)

                                                
                                                
-- stdout --
	* [functional-407525] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:03:58.856694  413609 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:03:58.856940  413609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:58.856972  413609 out.go:374] Setting ErrFile to fd 2...
	I1213 11:03:58.856994  413609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:03:58.857407  413609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:03:58.857851  413609 out.go:368] Setting JSON to false
	I1213 11:03:58.858738  413609 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9991,"bootTime":1765613848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:03:58.858908  413609 start.go:143] virtualization:  
	I1213 11:03:58.862324  413609 out.go:179] * [functional-407525] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1213 11:03:58.866135  413609 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:03:58.866258  413609 notify.go:221] Checking for updates...
	I1213 11:03:58.872373  413609 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:03:58.875333  413609 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:03:58.878187  413609 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:03:58.881258  413609 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:03:58.884106  413609 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:03:58.887713  413609 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:03:58.888335  413609 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:03:58.916210  413609 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:03:58.916322  413609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:03:58.973079  413609 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:03:58.963067693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:03:58.973189  413609 docker.go:319] overlay module found
	I1213 11:03:58.976364  413609 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 11:03:58.979233  413609 start.go:309] selected driver: docker
	I1213 11:03:58.979260  413609 start.go:927] validating driver "docker" against &{Name:functional-407525 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-407525 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 11:03:58.979383  413609 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:03:58.983100  413609 out.go:203] 
	W1213 11:03:58.986018  413609 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 11:03:58.988933  413609 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh -n functional-407525 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cp functional-407525:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3835676047/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh -n functional-407525 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh -n functional-407525 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.25s)
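
Note: the three cp calls above cover both copy directions plus creation of a missing destination directory inside the guest; the host-side destination in the second command below is illustrative (the test uses a per-test temp dir).
  out/minikube-linux-arm64 -p functional-407525 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> guest
  out/minikube-linux-arm64 -p functional-407525 cp functional-407525:/home/docker/cp-test.txt ./cp-test.txt   # guest -> host
  out/minikube-linux-arm64 -p functional-407525 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parent dirs created in the guest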

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/356328/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/test/nested/copy/356328/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/356328.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/ssl/certs/356328.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/356328.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /usr/share/ca-certificates/356328.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3563282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/ssl/certs/3563282.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3563282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /usr/share/ca-certificates/3563282.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.73s)
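
Note: CertSync checks that the 356328.pem and 3563282.pem test certificates are synced into the guest under /etc/ssl/certs, /usr/share/ca-certificates and the OpenSSL hash-named paths; the verification itself is just a series of ssh'd cat calls, e.g.:
  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/ssl/certs/356328.pem"
  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /usr/share/ca-certificates/356328.pem"
  out/minikube-linux-arm64 -p functional-407525 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named copy of the same cert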

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh "sudo systemctl is-active docker": exit status 1 (268.761544ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh "sudo systemctl is-active containerd": exit status 1 (293.879214ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.56s)
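
Note: with CRI-O as the active runtime, the docker and containerd units inside the node are expected to be inactive; systemctl is-active prints "inactive" and exits 3, which is why the non-zero exits above still count as a pass.
  out/minikube-linux-arm64 -p functional-407525 ssh "sudo systemctl is-active docker"        # inactive, exit 3
  out/minikube-linux-arm64 -p functional-407525 ssh "sudo systemctl is-active containerd"    # inactive, exit 3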

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-407525 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "322.491575ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "52.152866ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "348.074473ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "63.965347ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
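The `-o json` payload itself is not recorded in the log, only the timings, so its schema is not assumed here; a minimal Go sketch that decodes whatever the command returns generically (the binary path is taken from the log, everything else is hypothetical):

package main

// Sketch only: decode `profile list -o json` without assuming its structure,
// since the log above records timings but not the JSON payload.

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var v any
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", v)
}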

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1621853940/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.252131ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 11:03:52.214468  356328 retry.go:31] will retry after 295.360914ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1621853940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh "sudo umount -f /mount-9p": exit status 1 (269.141175ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-407525 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1621853940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.67s)
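The retry at 11:03:52 above reflects how this check is usually done: the 9p mount is not always visible inside the node the moment the mount daemon starts, so the probe is repeated. A rough Go sketch of that polling loop, with the binary path and mount point taken from the log and the retry count chosen arbitrarily:

package main

// Sketch only: poll for the 9p mount inside the node, mirroring the retry
// behaviour recorded in the log above.

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, mountPoint string, attempts int) bool {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if cmd.Run() == nil {
			return true // the mount is visible inside the node
		}
		time.Sleep(300 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println(waitForMount("functional-407525", "/mount-9p", 10))
}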

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T" /mount1: exit status 1 (599.593456ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 11:03:54.143732  356328 retry.go:31] will retry after 684.930068ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-407525 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-407525 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1733713432/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-407525 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-407525
localhost/kicbase/echo-server:functional-407525
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-407525 image ls --format short --alsologtostderr:
I1213 11:04:11.889331  415810 out.go:360] Setting OutFile to fd 1 ...
I1213 11:04:11.889493  415810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:11.889523  415810 out.go:374] Setting ErrFile to fd 2...
I1213 11:04:11.889544  415810 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:11.889791  415810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:04:11.890428  415810 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:11.890594  415810 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:11.891145  415810 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:04:11.908299  415810 ssh_runner.go:195] Run: systemctl --version
I1213 11:04:11.908362  415810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:04:11.924960  415810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
I1213 11:04:12.030733  415810 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-407525 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-407525  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-407525  │ 8ae3927b0d929 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ localhost/my-image                      │ functional-407525  │ 6b57c2d28a031 │ 1.64MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-407525 image ls --format table --alsologtostderr:
I1213 11:04:16.373702  416305 out.go:360] Setting OutFile to fd 1 ...
I1213 11:04:16.373980  416305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:16.374015  416305 out.go:374] Setting ErrFile to fd 2...
I1213 11:04:16.374036  416305 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:16.374910  416305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:04:16.375876  416305 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:16.376015  416305 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:16.376543  416305 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:04:16.393766  416305 ssh_runner.go:195] Run: systemctl --version
I1213 11:04:16.393827  416305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:04:16.412452  416305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
I1213 11:04:16.518121  416305 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-407525 image ls --format json --alsologtostderr:
[{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"31b6feb8c3ef9d6c39a98071ed602cba87a761e115641e6adf7517b60b063b50","repoDigests":["docker.io/library/87ec83872ab361d2a079b8b7f6e6226832433ed4c4aa0602b353d4c18ae597fd-tmp@sha256:c4232f08e46de958d685c597f53d1a4f22dc12e97e9229677a718f1ad5b07d51"],"repoTags":[],"size":"1638178"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"8ae3927b0d9294a77fef92f0725f51a8c81d6d3a1fd96e23258fed48689ebad5","repoDigests":["localhost/minikube-local-cache-test@sha256:147e820888449616cacd7db7264ea8e9dca9ae227bd736971c4fa6a0fa914690"],"repoTags":["localhost/minikube-local-cache-test:functional-407525"],"size":"3330"},{"id":"6b57c2d28a031af127e4612fcbbc89c4622f35ae13551e9ee6fcda066cef6491","repoDigests":["localhost/my-image@sha256:4d8f4c481d6e94aadd679e445c706a5e7e13b57f38d412be5c31ae80ac50727c"],"repoTags":["localhost/my-image:functional-407525"],"size":"1640790"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-407525"],"size":"4788229"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-407525 image ls --format json --alsologtostderr:
I1213 11:04:16.129653  416264 out.go:360] Setting OutFile to fd 1 ...
I1213 11:04:16.130042  416264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:16.130050  416264 out.go:374] Setting ErrFile to fd 2...
I1213 11:04:16.130055  416264 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:16.130331  416264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:04:16.131026  416264 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:16.131142  416264 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:16.131679  416264 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:04:16.149047  416264 ssh_runner.go:195] Run: systemctl --version
I1213 11:04:16.149107  416264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:04:16.166246  416264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
I1213 11:04:16.270618  416264 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)
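The stdout above is a single JSON array of image records; the field names (`id`, `repoDigests`, `repoTags`, `size`) are visible in the output itself. A small Go sketch of decoding it, using a shortened literal in place of the full array:

package main

// Sketch only: decode the `image ls --format json` output shown above.
// The struct mirrors the fields visible in that output; anything else is ignored.

import (
	"encoding/json"
	"fmt"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	data := []byte(`[{"id":"d7b100cd9a77b","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]`)
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.RepoTags, img.Size)
	}
}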

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-407525 image ls --format yaml --alsologtostderr:
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-407525
size: "4788229"
- id: 8ae3927b0d9294a77fef92f0725f51a8c81d6d3a1fd96e23258fed48689ebad5
repoDigests:
- localhost/minikube-local-cache-test@sha256:147e820888449616cacd7db7264ea8e9dca9ae227bd736971c4fa6a0fa914690
repoTags:
- localhost/minikube-local-cache-test:functional-407525
size: "3330"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-407525 image ls --format yaml --alsologtostderr:
I1213 11:04:12.119907  415846 out.go:360] Setting OutFile to fd 1 ...
I1213 11:04:12.120066  415846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:12.120097  415846 out.go:374] Setting ErrFile to fd 2...
I1213 11:04:12.120128  415846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:12.120398  415846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:04:12.121020  415846 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:12.121194  415846 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:12.121748  415846 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:04:12.138550  415846 ssh_runner.go:195] Run: systemctl --version
I1213 11:04:12.138608  415846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:04:12.155955  415846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
I1213 11:04:12.258195  415846 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-407525 ssh pgrep buildkitd: exit status 1 (260.49192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image build -t localhost/my-image:functional-407525 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-407525 image build -t localhost/my-image:functional-407525 testdata/build --alsologtostderr: (3.282105694s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-407525 image build -t localhost/my-image:functional-407525 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 31b6feb8c3e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-407525
--> 6b57c2d28a0
Successfully tagged localhost/my-image:functional-407525
6b57c2d28a031af127e4612fcbbc89c4622f35ae13551e9ee6fcda066cef6491
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-407525 image build -t localhost/my-image:functional-407525 testdata/build --alsologtostderr:
I1213 11:04:12.616371  415955 out.go:360] Setting OutFile to fd 1 ...
I1213 11:04:12.616495  415955 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:12.616508  415955 out.go:374] Setting ErrFile to fd 2...
I1213 11:04:12.616527  415955 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 11:04:12.616799  415955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
I1213 11:04:12.617442  415955 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:12.618002  415955 config.go:182] Loaded profile config "functional-407525": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 11:04:12.618619  415955 cli_runner.go:164] Run: docker container inspect functional-407525 --format={{.State.Status}}
I1213 11:04:12.636287  415955 ssh_runner.go:195] Run: systemctl --version
I1213 11:04:12.636346  415955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-407525
I1213 11:04:12.653141  415955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/functional-407525/id_rsa Username:docker}
I1213 11:04:12.757938  415955 build_images.go:162] Building image from path: /tmp/build.592131313.tar
I1213 11:04:12.758060  415955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 11:04:12.765653  415955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.592131313.tar
I1213 11:04:12.769251  415955 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.592131313.tar: stat -c "%s %y" /var/lib/minikube/build/build.592131313.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.592131313.tar': No such file or directory
I1213 11:04:12.769284  415955 ssh_runner.go:362] scp /tmp/build.592131313.tar --> /var/lib/minikube/build/build.592131313.tar (3072 bytes)
I1213 11:04:12.786688  415955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.592131313
I1213 11:04:12.794282  415955 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.592131313 -xf /var/lib/minikube/build/build.592131313.tar
I1213 11:04:12.802155  415955 crio.go:315] Building image: /var/lib/minikube/build/build.592131313
I1213 11:04:12.802262  415955 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-407525 /var/lib/minikube/build/build.592131313 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1213 11:04:15.816700  415955 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-407525 /var/lib/minikube/build/build.592131313 --cgroup-manager=cgroupfs: (3.014408243s)
I1213 11:04:15.816764  415955 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.592131313
I1213 11:04:15.824422  415955 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.592131313.tar
I1213 11:04:15.831591  415955 build_images.go:218] Built localhost/my-image:functional-407525 from /tmp/build.592131313.tar
I1213 11:04:15.831625  415955 build_images.go:134] succeeded building to: functional-407525
I1213 11:04:15.831630  415955 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.78s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-407525
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image load --daemon kicbase/echo-server:functional-407525 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image load --daemon kicbase/echo-server:functional-407525 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-407525
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image load --daemon kicbase/echo-server:functional-407525 --alsologtostderr
E1213 11:04:06.640211  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image save kicbase/echo-server:functional-407525 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image rm kicbase/echo-server:functional-407525 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-407525
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 image save --daemon kicbase/echo-server:functional-407525 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-407525
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-407525 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-407525
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-407525
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-407525
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (152.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 11:07:00.470770  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:00.477075  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:00.488431  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:00.509914  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:00.551287  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:00.632733  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:00.794206  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:01.115887  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:01.757904  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:03.039278  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:05.600587  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:10.721923  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:20.963725  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:27.930331  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:07:41.445925  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:08:22.407669  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m31.659578162s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (152.57s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 kubectl -- rollout status deployment/busybox: (4.729894716s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-2kltr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-8l5sr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-dcj52 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-2kltr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-8l5sr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-dcj52 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-2kltr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-8l5sr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-dcj52 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.47s)
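
Repro sketch (hand-run equivalent of the checks above; <pod> is a placeholder for one of the busybox pod names, and plain minikube stands in for the out/minikube-linux-arm64 binary under test):
  minikube -p ha-847754 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
  minikube -p ha-847754 kubectl -- exec <pod> -- nslookup kubernetes.io
  minikube -p ha-847754 kubectl -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local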

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-2kltr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-2kltr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-8l5sr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-8l5sr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-dcj52 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 kubectl -- exec busybox-7b57f96db7-dcj52 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)
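
Repro sketch (same placeholders as above): the test resolves host.minikube.internal inside each pod and pings the resolved host gateway:
  minikube -p ha-847754 kubectl -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  minikube -p ha-847754 kubectl -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"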

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (35.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 node add --alsologtostderr -v 5
E1213 11:09:06.639682  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 node add --alsologtostderr -v 5: (34.712570131s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5: (1.035597586s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.75s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-847754 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.066618463s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 status --output json --alsologtostderr -v 5: (1.093095372s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp testdata/cp-test.txt ha-847754:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1217798807/001/cp-test_ha-847754.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754:/home/docker/cp-test.txt ha-847754-m02:/home/docker/cp-test_ha-847754_ha-847754-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test_ha-847754_ha-847754-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754:/home/docker/cp-test.txt ha-847754-m03:/home/docker/cp-test_ha-847754_ha-847754-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test_ha-847754_ha-847754-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754:/home/docker/cp-test.txt ha-847754-m04:/home/docker/cp-test_ha-847754_ha-847754-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test_ha-847754_ha-847754-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp testdata/cp-test.txt ha-847754-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1217798807/001/cp-test_ha-847754-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m02:/home/docker/cp-test.txt ha-847754:/home/docker/cp-test_ha-847754-m02_ha-847754.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test_ha-847754-m02_ha-847754.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m02:/home/docker/cp-test.txt ha-847754-m03:/home/docker/cp-test_ha-847754-m02_ha-847754-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test_ha-847754-m02_ha-847754-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m02:/home/docker/cp-test.txt ha-847754-m04:/home/docker/cp-test_ha-847754-m02_ha-847754-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test_ha-847754-m02_ha-847754-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp testdata/cp-test.txt ha-847754-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1217798807/001/cp-test_ha-847754-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m03:/home/docker/cp-test.txt ha-847754:/home/docker/cp-test_ha-847754-m03_ha-847754.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test_ha-847754-m03_ha-847754.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m03:/home/docker/cp-test.txt ha-847754-m02:/home/docker/cp-test_ha-847754-m03_ha-847754-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test_ha-847754-m03_ha-847754-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m03:/home/docker/cp-test.txt ha-847754-m04:/home/docker/cp-test_ha-847754-m03_ha-847754-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test_ha-847754-m03_ha-847754-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp testdata/cp-test.txt ha-847754-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1217798807/001/cp-test_ha-847754-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m04:/home/docker/cp-test.txt ha-847754:/home/docker/cp-test_ha-847754-m04_ha-847754.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754 "sudo cat /home/docker/cp-test_ha-847754-m04_ha-847754.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m04:/home/docker/cp-test.txt ha-847754-m02:/home/docker/cp-test_ha-847754-m04_ha-847754-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test_ha-847754-m04_ha-847754-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 cp ha-847754-m04:/home/docker/cp-test.txt ha-847754-m03:/home/docker/cp-test_ha-847754-m04_ha-847754-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 ssh -n ha-847754-m03 "sudo cat /home/docker/cp-test_ha-847754-m04_ha-847754-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.99s)
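
Repro sketch (one node pair shown; the test repeats this for every source/destination combination): copy a file onto a node with minikube cp, then read it back over ssh:
  minikube -p ha-847754 cp testdata/cp-test.txt ha-847754-m02:/home/docker/cp-test.txt
  minikube -p ha-847754 ssh -n ha-847754-m02 "sudo cat /home/docker/cp-test.txt"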

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 node stop m02 --alsologtostderr -v 5
E1213 11:09:44.329743  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 node stop m02 --alsologtostderr -v 5: (12.092691379s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5: exit status 7 (815.096554ms)

                                                
                                                
-- stdout --
	ha-847754
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-847754-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-847754-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-847754-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:09:55.442778  432231 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:09:55.442984  432231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:09:55.443011  432231 out.go:374] Setting ErrFile to fd 2...
	I1213 11:09:55.443033  432231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:09:55.443414  432231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:09:55.443751  432231 out.go:368] Setting JSON to false
	I1213 11:09:55.443802  432231 mustload.go:66] Loading cluster: ha-847754
	I1213 11:09:55.444576  432231 config.go:182] Loaded profile config "ha-847754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:09:55.444633  432231 status.go:174] checking status of ha-847754 ...
	I1213 11:09:55.445653  432231 notify.go:221] Checking for updates...
	I1213 11:09:55.446001  432231 cli_runner.go:164] Run: docker container inspect ha-847754 --format={{.State.Status}}
	I1213 11:09:55.474135  432231 status.go:371] ha-847754 host status = "Running" (err=<nil>)
	I1213 11:09:55.474159  432231 host.go:66] Checking if "ha-847754" exists ...
	I1213 11:09:55.474485  432231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-847754
	I1213 11:09:55.516951  432231 host.go:66] Checking if "ha-847754" exists ...
	I1213 11:09:55.517315  432231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:09:55.517379  432231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-847754
	I1213 11:09:55.542312  432231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/ha-847754/id_rsa Username:docker}
	I1213 11:09:55.657475  432231 ssh_runner.go:195] Run: systemctl --version
	I1213 11:09:55.664954  432231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:09:55.679295  432231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:09:55.739807  432231 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-13 11:09:55.729479205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:09:55.740344  432231 kubeconfig.go:125] found "ha-847754" server: "https://192.168.49.254:8443"
	I1213 11:09:55.740377  432231 api_server.go:166] Checking apiserver status ...
	I1213 11:09:55.740429  432231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:09:55.752149  432231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	I1213 11:09:55.760441  432231 api_server.go:182] apiserver freezer: "7:freezer:/docker/a916d1ef6973839986a9cd84267a6cfcc1a977912971742f58e3841aed5f1515/crio/crio-55ec85e7efe91633459db9bd26c76e96f22f157a873ad1b82e710b5eeba53d49"
	I1213 11:09:55.760515  432231 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a916d1ef6973839986a9cd84267a6cfcc1a977912971742f58e3841aed5f1515/crio/crio-55ec85e7efe91633459db9bd26c76e96f22f157a873ad1b82e710b5eeba53d49/freezer.state
	I1213 11:09:55.768590  432231 api_server.go:204] freezer state: "THAWED"
	I1213 11:09:55.768617  432231 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 11:09:55.776876  432231 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 11:09:55.776907  432231 status.go:463] ha-847754 apiserver status = Running (err=<nil>)
	I1213 11:09:55.776919  432231 status.go:176] ha-847754 status: &{Name:ha-847754 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:09:55.776936  432231 status.go:174] checking status of ha-847754-m02 ...
	I1213 11:09:55.777256  432231 cli_runner.go:164] Run: docker container inspect ha-847754-m02 --format={{.State.Status}}
	I1213 11:09:55.796174  432231 status.go:371] ha-847754-m02 host status = "Stopped" (err=<nil>)
	I1213 11:09:55.796200  432231 status.go:384] host is not running, skipping remaining checks
	I1213 11:09:55.796214  432231 status.go:176] ha-847754-m02 status: &{Name:ha-847754-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:09:55.796233  432231 status.go:174] checking status of ha-847754-m03 ...
	I1213 11:09:55.796551  432231 cli_runner.go:164] Run: docker container inspect ha-847754-m03 --format={{.State.Status}}
	I1213 11:09:55.817108  432231 status.go:371] ha-847754-m03 host status = "Running" (err=<nil>)
	I1213 11:09:55.817136  432231 host.go:66] Checking if "ha-847754-m03" exists ...
	I1213 11:09:55.817443  432231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-847754-m03
	I1213 11:09:55.835842  432231 host.go:66] Checking if "ha-847754-m03" exists ...
	I1213 11:09:55.836196  432231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:09:55.836240  432231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-847754-m03
	I1213 11:09:55.853359  432231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/ha-847754-m03/id_rsa Username:docker}
	I1213 11:09:55.962607  432231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:09:55.978698  432231 kubeconfig.go:125] found "ha-847754" server: "https://192.168.49.254:8443"
	I1213 11:09:55.978729  432231 api_server.go:166] Checking apiserver status ...
	I1213 11:09:55.978785  432231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:09:55.991375  432231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	I1213 11:09:56.000726  432231 api_server.go:182] apiserver freezer: "7:freezer:/docker/9c6018fcd5e15e8874e98153982086831c91f1bb74e96098ec3fc68b6458ace4/crio/crio-8ef3395607c6a276063d202abbaf3a6d634ff428deb1caebe4d933ad106df4a0"
	I1213 11:09:56.000800  432231 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9c6018fcd5e15e8874e98153982086831c91f1bb74e96098ec3fc68b6458ace4/crio/crio-8ef3395607c6a276063d202abbaf3a6d634ff428deb1caebe4d933ad106df4a0/freezer.state
	I1213 11:09:56.013543  432231 api_server.go:204] freezer state: "THAWED"
	I1213 11:09:56.013629  432231 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 11:09:56.022320  432231 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 11:09:56.022351  432231 status.go:463] ha-847754-m03 apiserver status = Running (err=<nil>)
	I1213 11:09:56.022361  432231 status.go:176] ha-847754-m03 status: &{Name:ha-847754-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:09:56.022395  432231 status.go:174] checking status of ha-847754-m04 ...
	I1213 11:09:56.022782  432231 cli_runner.go:164] Run: docker container inspect ha-847754-m04 --format={{.State.Status}}
	I1213 11:09:56.040689  432231 status.go:371] ha-847754-m04 host status = "Running" (err=<nil>)
	I1213 11:09:56.040716  432231 host.go:66] Checking if "ha-847754-m04" exists ...
	I1213 11:09:56.041030  432231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-847754-m04
	I1213 11:09:56.062012  432231 host.go:66] Checking if "ha-847754-m04" exists ...
	I1213 11:09:56.062337  432231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:09:56.062375  432231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-847754-m04
	I1213 11:09:56.081351  432231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/ha-847754-m04/id_rsa Username:docker}
	I1213 11:09:56.184978  432231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:09:56.201758  432231 status.go:176] ha-847754-m04 status: &{Name:ha-847754-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
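
Repro sketch: stop one secondary control-plane node and check cluster status; with a node down, status exits non-zero (exit status 7 in the run above):
  minikube -p ha-847754 node stop m02
  minikube -p ha-847754 status; echo "exit: $?"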

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (30.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 node start m02 --alsologtostderr -v 5: (28.906149593s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5: (1.335412399s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.35s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.244422103s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 stop --alsologtostderr -v 5: (27.456888111s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 start --wait true --alsologtostderr -v 5
E1213 11:12:00.470533  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:12:09.713208  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:12:27.931026  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:12:28.171599  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 start --wait true --alsologtostderr -v 5: (1m40.49839791s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.16s)
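
Repro sketch: a full stop/start cycle is expected to leave the node list unchanged:
  minikube -p ha-847754 node list
  minikube -p ha-847754 stop
  minikube -p ha-847754 start --wait true
  minikube -p ha-847754 node list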

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 node delete m03 --alsologtostderr -v 5: (11.450778373s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.83s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 stop --alsologtostderr -v 5: (35.970946158s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5: exit status 7 (120.739582ms)

                                                
                                                
-- stdout --
	ha-847754
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-847754-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-847754-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:13:26.088100  444197 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:13:26.088234  444197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:13:26.088244  444197 out.go:374] Setting ErrFile to fd 2...
	I1213 11:13:26.088250  444197 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:13:26.088522  444197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:13:26.088719  444197 out.go:368] Setting JSON to false
	I1213 11:13:26.088753  444197 mustload.go:66] Loading cluster: ha-847754
	I1213 11:13:26.088850  444197 notify.go:221] Checking for updates...
	I1213 11:13:26.089203  444197 config.go:182] Loaded profile config "ha-847754": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:13:26.089228  444197 status.go:174] checking status of ha-847754 ...
	I1213 11:13:26.090077  444197 cli_runner.go:164] Run: docker container inspect ha-847754 --format={{.State.Status}}
	I1213 11:13:26.108130  444197 status.go:371] ha-847754 host status = "Stopped" (err=<nil>)
	I1213 11:13:26.108153  444197 status.go:384] host is not running, skipping remaining checks
	I1213 11:13:26.108161  444197 status.go:176] ha-847754 status: &{Name:ha-847754 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:13:26.108189  444197 status.go:174] checking status of ha-847754-m02 ...
	I1213 11:13:26.108491  444197 cli_runner.go:164] Run: docker container inspect ha-847754-m02 --format={{.State.Status}}
	I1213 11:13:26.135821  444197 status.go:371] ha-847754-m02 host status = "Stopped" (err=<nil>)
	I1213 11:13:26.135881  444197 status.go:384] host is not running, skipping remaining checks
	I1213 11:13:26.135902  444197 status.go:176] ha-847754-m02 status: &{Name:ha-847754-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:13:26.135935  444197 status.go:174] checking status of ha-847754-m04 ...
	I1213 11:13:26.136330  444197 cli_runner.go:164] Run: docker container inspect ha-847754-m04 --format={{.State.Status}}
	I1213 11:13:26.153750  444197 status.go:371] ha-847754-m04 host status = "Stopped" (err=<nil>)
	I1213 11:13:26.153772  444197 status.go:384] host is not running, skipping remaining checks
	I1213 11:13:26.153784  444197 status.go:176] ha-847754-m04 status: &{Name:ha-847754-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (68.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 11:14:06.640321  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m7.838781783s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.78s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (52.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 node add --control-plane --alsologtostderr -v 5: (51.912027901s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-847754 status --alsologtostderr -v 5: (1.075858016s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (52.99s)
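
Repro sketch: an extra control-plane node is added with node add --control-plane and should then show up in status:
  minikube -p ha-847754 node add --control-plane
  minikube -p ha-847754 status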

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.051364836s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                    
TestJSONOutput/start/Command (50.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-615758 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-615758 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.936725625s)
--- PASS: TestJSONOutput/start/Command (50.94s)
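
Repro sketch: with --output=json, minikube start emits one CloudEvents-style JSON object per line; piping through jq (not part of the test, shown only for inspection) lists the step events:
  minikube start -p json-output-615758 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'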

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-615758 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-615758 --output=json --user=testUser: (5.851363276s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-532644 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-532644 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.375777ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c1b382ce-315e-4bce-9cc5-47992485c3cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-532644] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f04d1c3b-59dd-402e-a251-fb35c2980835","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22127"}}
	{"specversion":"1.0","id":"fc1864a6-c060-4786-885c-acec39d4331a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ddbde0f2-8937-489b-aba9-71efbced75bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig"}}
	{"specversion":"1.0","id":"91500584-62ea-47e1-aca8-b3893037ff56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube"}}
	{"specversion":"1.0","id":"c5cbfe81-fe2a-4f4b-b4d4-334d1f588a08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1772c625-e91a-4781-b653-846c2ad9070f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bfe2ce9a-60a8-46a2-9bb6-79c7333881a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-532644" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-532644
--- PASS: TestErrorJSONOutput (0.24s)
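
Repro sketch: an unsupported driver makes minikube exit 56 and emit an io.k8s.sigs.minikube.error event as the final JSON line; jq (not part of the test, shown only for inspection) can pull the exit code out of it:
  minikube start -p json-output-error-532644 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode'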

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.23s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-817605 --network=
E1213 11:17:00.471706  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-817605 --network=: (37.985868242s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-817605" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-817605
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-817605: (2.221194628s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.23s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.1s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-835467 --network=bridge
E1213 11:17:27.930368  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-835467 --network=bridge: (34.98576886s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-835467" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-835467
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-835467: (2.090239163s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.10s)

                                                
                                    
TestKicExistingNetwork (32.98s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 11:18:02.906553  356328 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 11:18:02.922387  356328 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 11:18:02.922470  356328 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 11:18:02.922488  356328 cli_runner.go:164] Run: docker network inspect existing-network
W1213 11:18:02.936858  356328 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 11:18:02.936891  356328 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 11:18:02.936905  356328 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 11:18:02.937019  356328 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 11:18:02.955645  356328 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0545902499c2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:32:4c:cb:8d:7b} reservation:<nil>}
I1213 11:18:02.956039  356328 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4004e60bb0}
I1213 11:18:02.956066  356328 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 11:18:02.956115  356328 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 11:18:03.016389  356328 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-417096 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-417096 --network=existing-network: (30.713015353s)
helpers_test.go:176: Cleaning up "existing-network-417096" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-417096
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-417096: (2.128553414s)
I1213 11:18:35.874623  356328 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.98s)
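Note: the flow this test exercises can be reproduced by hand. A minimal sketch based on the commands logged above (the network and profile names here are illustrative, not the ones generated by the test):

  # pre-create a bridge network, then point minikube at it
  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
  out/minikube-linux-arm64 start -p existing-net-demo --network=existing-network
  # cleanup: delete the profile, then the network it was attached to
  out/minikube-linux-arm64 delete -p existing-net-demo
  docker network rm existing-network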

TestKicCustomSubnet (37.78s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-260273 --subnet=192.168.60.0/24
E1213 11:19:06.643667  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-260273 --subnet=192.168.60.0/24: (35.550749775s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-260273 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-260273" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-260273
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-260273: (2.210884258s)
--- PASS: TestKicCustomSubnet (37.78s)
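Note: a minimal sketch of the custom-subnet check above, assuming the requested subnet is free on the host (profile name is illustrative); minikube names the docker network after the profile, which is what the inspect step relies on:

  out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
  docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
  out/minikube-linux-arm64 delete -p subnet-demo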

TestKicStaticIP (35.88s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-313183 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-313183 --static-ip=192.168.200.200: (33.486291236s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-313183 ip
helpers_test.go:176: Cleaning up "static-ip-313183" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-313183
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-313183: (2.23363948s)
--- PASS: TestKicStaticIP (35.88s)
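Note: the static-IP variant differs only in pinning the node address; a sketch with an illustrative profile name:

  out/minikube-linux-arm64 start -p static-ip-demo --static-ip=192.168.200.200
  out/minikube-linux-arm64 -p static-ip-demo ip     # should print 192.168.200.200
  out/minikube-linux-arm64 delete -p static-ip-demo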

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (75.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-267931 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-267931 --driver=docker  --container-runtime=crio: (33.687349226s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-270786 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-270786 --driver=docker  --container-runtime=crio: (35.84394908s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-267931
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-270786
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-270786" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-270786
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-270786: (2.084226122s)
helpers_test.go:176: Cleaning up "first-267931" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-267931
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-267931: (2.403267037s)
--- PASS: TestMinikubeProfile (75.76s)
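Note: a condensed sketch of the profile-switching flow above (profile names are illustrative):

  out/minikube-linux-arm64 start -p first --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 start -p second --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 profile first             # make "first" the active profile
  out/minikube-linux-arm64 profile list -o json      # the active profile is reported in the JSON output
  out/minikube-linux-arm64 delete -p second
  out/minikube-linux-arm64 delete -p first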

TestMountStart/serial/StartWithMountFirst (8.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-986694 --memory=3072 --mount-string /tmp/TestMountStartserial1033410605/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-986694 --memory=3072 --mount-string /tmp/TestMountStartserial1033410605/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.922727813s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.92s)
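Note: a condensed sketch of the mount workflow exercised by this and the following MountStart subtests (host path, mount port, and profile name are illustrative):

  out/minikube-linux-arm64 start -p mount-demo --memory=3072 --no-kubernetes \
    --mount-string /tmp/host-dir:/minikube-host --mount-port 46464 \
    --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host   # the host directory should be visible
  out/minikube-linux-arm64 stop -p mount-demo
  out/minikube-linux-arm64 start -p mount-demo                      # the mount is re-established after a restart
  out/minikube-linux-arm64 delete -p mount-demo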

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-986694 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.71s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-989040 --memory=3072 --mount-string /tmp/TestMountStartserial1033410605/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-989040 --memory=3072 --mount-string /tmp/TestMountStartserial1033410605/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.711117682s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.71s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-989040 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-986694 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-986694 --alsologtostderr -v=5: (1.715565718s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-989040 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-989040
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-989040: (1.286487197s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-989040
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-989040: (7.011968786s)
--- PASS: TestMountStart/serial/RestartStopped (8.01s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-989040 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (81.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-041361 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 11:22:00.469930  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:22:11.002728  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:22:27.930424  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-041361 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.712854544s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.26s)
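Note: a minimal sketch of the two-node bring-up above (profile name is illustrative):

  out/minikube-linux-arm64 start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p multinode-demo status   # control plane and worker should both report Running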

TestMultiNode/serial/DeployApp2Nodes (4.98s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-041361 -- rollout status deployment/busybox: (3.243993484s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-9b75t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-bcpqx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-9b75t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-bcpqx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-9b75t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-bcpqx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.98s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-9b75t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-9b75t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-bcpqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-041361 -- exec busybox-7b57f96db7-bcpqx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
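Note: the host-reachability check above resolves host.minikube.internal from inside a pod and then pings the address it resolves to (192.168.67.1 is the docker network gateway in this run); a sketch with an illustrative pod name:

  out/minikube-linux-arm64 kubectl -p multinode-demo -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-arm64 kubectl -p multinode-demo -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"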

TestMultiNode/serial/AddNode (29.33s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-041361 -v=5 --alsologtostderr
E1213 11:23:23.533060  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-041361 -v=5 --alsologtostderr: (28.624429143s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.33s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-041361 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.63s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp testdata/cp-test.txt multinode-041361:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile544561655/001/cp-test_multinode-041361.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361:/home/docker/cp-test.txt multinode-041361-m02:/home/docker/cp-test_multinode-041361_multinode-041361-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m02 "sudo cat /home/docker/cp-test_multinode-041361_multinode-041361-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361:/home/docker/cp-test.txt multinode-041361-m03:/home/docker/cp-test_multinode-041361_multinode-041361-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m03 "sudo cat /home/docker/cp-test_multinode-041361_multinode-041361-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp testdata/cp-test.txt multinode-041361-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile544561655/001/cp-test_multinode-041361-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361-m02:/home/docker/cp-test.txt multinode-041361:/home/docker/cp-test_multinode-041361-m02_multinode-041361.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361 "sudo cat /home/docker/cp-test_multinode-041361-m02_multinode-041361.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361-m02:/home/docker/cp-test.txt multinode-041361-m03:/home/docker/cp-test_multinode-041361-m02_multinode-041361-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m03 "sudo cat /home/docker/cp-test_multinode-041361-m02_multinode-041361-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp testdata/cp-test.txt multinode-041361-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile544561655/001/cp-test_multinode-041361-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361-m03:/home/docker/cp-test.txt multinode-041361:/home/docker/cp-test_multinode-041361-m03_multinode-041361.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361 "sudo cat /home/docker/cp-test_multinode-041361-m03_multinode-041361.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 cp multinode-041361-m03:/home/docker/cp-test.txt multinode-041361-m02:/home/docker/cp-test_multinode-041361-m03_multinode-041361-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 ssh -n multinode-041361-m02 "sudo cat /home/docker/cp-test_multinode-041361-m03_multinode-041361-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.63s)
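Note: the copy matrix above exercises minikube cp in three directions plus an ssh read-back; a sketch of one pass (profile, node, and path names are illustrative):

  out/minikube-linux-arm64 -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt                          # host -> node
  out/minikube-linux-arm64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt                         # node -> host
  out/minikube-linux-arm64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt   # node -> node
  out/minikube-linux-arm64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"                            # verify contents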

TestMultiNode/serial/StopNode (2.64s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-041361 node stop m03: (1.519343742s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-041361 status: exit status 7 (552.681587ms)

-- stdout --
	multinode-041361
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-041361-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-041361-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr: exit status 7 (568.366886ms)

-- stdout --
	multinode-041361
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-041361-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-041361-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 11:23:47.076285  495017 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:23:47.076518  495017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:23:47.076533  495017 out.go:374] Setting ErrFile to fd 2...
	I1213 11:23:47.076539  495017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:23:47.076815  495017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:23:47.077016  495017 out.go:368] Setting JSON to false
	I1213 11:23:47.077049  495017 mustload.go:66] Loading cluster: multinode-041361
	I1213 11:23:47.077163  495017 notify.go:221] Checking for updates...
	I1213 11:23:47.077484  495017 config.go:182] Loaded profile config "multinode-041361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:23:47.077501  495017 status.go:174] checking status of multinode-041361 ...
	I1213 11:23:47.078284  495017 cli_runner.go:164] Run: docker container inspect multinode-041361 --format={{.State.Status}}
	I1213 11:23:47.100217  495017 status.go:371] multinode-041361 host status = "Running" (err=<nil>)
	I1213 11:23:47.100245  495017 host.go:66] Checking if "multinode-041361" exists ...
	I1213 11:23:47.100560  495017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041361
	I1213 11:23:47.130452  495017 host.go:66] Checking if "multinode-041361" exists ...
	I1213 11:23:47.130783  495017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:23:47.130830  495017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041361
	I1213 11:23:47.149848  495017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/multinode-041361/id_rsa Username:docker}
	I1213 11:23:47.253454  495017 ssh_runner.go:195] Run: systemctl --version
	I1213 11:23:47.260389  495017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:23:47.276126  495017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:23:47.346937  495017 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-13 11:23:47.337530711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:23:47.347497  495017 kubeconfig.go:125] found "multinode-041361" server: "https://192.168.67.2:8443"
	I1213 11:23:47.347565  495017 api_server.go:166] Checking apiserver status ...
	I1213 11:23:47.347615  495017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 11:23:47.360065  495017 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	I1213 11:23:47.368615  495017 api_server.go:182] apiserver freezer: "7:freezer:/docker/d3d441342d957c0c60f5fe134778c910fd880f4e5065fb51af77d8edfea5346b/crio/crio-234631d8e177c81a0d2a916f3056bf10ac263c834824efe12590d916d8305b57"
	I1213 11:23:47.368718  495017 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d3d441342d957c0c60f5fe134778c910fd880f4e5065fb51af77d8edfea5346b/crio/crio-234631d8e177c81a0d2a916f3056bf10ac263c834824efe12590d916d8305b57/freezer.state
	I1213 11:23:47.377584  495017 api_server.go:204] freezer state: "THAWED"
	I1213 11:23:47.377621  495017 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 11:23:47.385722  495017 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 11:23:47.385750  495017 status.go:463] multinode-041361 apiserver status = Running (err=<nil>)
	I1213 11:23:47.385774  495017 status.go:176] multinode-041361 status: &{Name:multinode-041361 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:23:47.385791  495017 status.go:174] checking status of multinode-041361-m02 ...
	I1213 11:23:47.386106  495017 cli_runner.go:164] Run: docker container inspect multinode-041361-m02 --format={{.State.Status}}
	I1213 11:23:47.403836  495017 status.go:371] multinode-041361-m02 host status = "Running" (err=<nil>)
	I1213 11:23:47.403862  495017 host.go:66] Checking if "multinode-041361-m02" exists ...
	I1213 11:23:47.404189  495017 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-041361-m02
	I1213 11:23:47.426445  495017 host.go:66] Checking if "multinode-041361-m02" exists ...
	I1213 11:23:47.426777  495017 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 11:23:47.426838  495017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-041361-m02
	I1213 11:23:47.444290  495017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/22127-354468/.minikube/machines/multinode-041361-m02/id_rsa Username:docker}
	I1213 11:23:47.549686  495017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 11:23:47.562829  495017 status.go:176] multinode-041361-m02 status: &{Name:multinode-041361-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:23:47.562864  495017 status.go:174] checking status of multinode-041361-m03 ...
	I1213 11:23:47.563178  495017 cli_runner.go:164] Run: docker container inspect multinode-041361-m03 --format={{.State.Status}}
	I1213 11:23:47.582480  495017 status.go:371] multinode-041361-m03 host status = "Stopped" (err=<nil>)
	I1213 11:23:47.582505  495017 status.go:384] host is not running, skipping remaining checks
	I1213 11:23:47.582513  495017 status.go:176] multinode-041361-m03 status: &{Name:multinode-041361-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.64s)
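Note: a sketch of the single-node stop/start cycle covered by this and the next subtest (profile name is illustrative):

  out/minikube-linux-arm64 -p multinode-demo node stop m03
  out/minikube-linux-arm64 -p multinode-demo status        # exits with status 7 while any node is stopped
  out/minikube-linux-arm64 -p multinode-demo node start m03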

TestMultiNode/serial/StartAfterStop (8.33s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-041361 node start m03 -v=5 --alsologtostderr: (7.535746904s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.33s)

TestMultiNode/serial/RestartKeepsNodes (73.76s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-041361
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-041361
E1213 11:24:06.644476  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-041361: (25.161508772s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-041361 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-041361 --wait=true -v=5 --alsologtostderr: (48.466597115s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-041361
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.76s)
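Note: a sketch of the full-cluster restart above, which checks that stopping and restarting preserves the node list (profile name is illustrative):

  out/minikube-linux-arm64 node list -p multinode-demo
  out/minikube-linux-arm64 stop -p multinode-demo
  out/minikube-linux-arm64 start -p multinode-demo --wait=true
  out/minikube-linux-arm64 node list -p multinode-demo     # the same nodes should be listed again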

TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-041361 node delete m03: (4.922674413s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)

TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-041361 stop: (23.788134879s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-041361 status: exit status 7 (101.381134ms)

-- stdout --
	multinode-041361
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-041361-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr: exit status 7 (91.067174ms)

-- stdout --
	multinode-041361
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-041361-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 11:25:39.262675  502850 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:25:39.262872  502850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:25:39.262903  502850 out.go:374] Setting ErrFile to fd 2...
	I1213 11:25:39.262932  502850 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:25:39.263301  502850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:25:39.263590  502850 out.go:368] Setting JSON to false
	I1213 11:25:39.263643  502850 mustload.go:66] Loading cluster: multinode-041361
	I1213 11:25:39.264331  502850 config.go:182] Loaded profile config "multinode-041361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:25:39.264380  502850 status.go:174] checking status of multinode-041361 ...
	I1213 11:25:39.265455  502850 notify.go:221] Checking for updates...
	I1213 11:25:39.265670  502850 cli_runner.go:164] Run: docker container inspect multinode-041361 --format={{.State.Status}}
	I1213 11:25:39.284558  502850 status.go:371] multinode-041361 host status = "Stopped" (err=<nil>)
	I1213 11:25:39.284585  502850 status.go:384] host is not running, skipping remaining checks
	I1213 11:25:39.284592  502850 status.go:176] multinode-041361 status: &{Name:multinode-041361 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 11:25:39.284619  502850 status.go:174] checking status of multinode-041361-m02 ...
	I1213 11:25:39.284997  502850 cli_runner.go:164] Run: docker container inspect multinode-041361-m02 --format={{.State.Status}}
	I1213 11:25:39.309082  502850 status.go:371] multinode-041361-m02 host status = "Stopped" (err=<nil>)
	I1213 11:25:39.309103  502850 status.go:384] host is not running, skipping remaining checks
	I1213 11:25:39.309110  502850 status.go:176] multinode-041361-m02 status: &{Name:multinode-041361-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

TestMultiNode/serial/RestartMultiNode (52.77s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-041361 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-041361 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.036646618s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-041361 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.77s)

TestMultiNode/serial/ValidateNameConflict (37.66s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-041361
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-041361-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-041361-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.143098ms)

-- stdout --
	* [multinode-041361-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-041361-m02' is duplicated with machine name 'multinode-041361-m02' in profile 'multinode-041361'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-041361-m03 --driver=docker  --container-runtime=crio
E1213 11:27:00.471850  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-041361-m03 --driver=docker  --container-runtime=crio: (35.11450694s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-041361
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-041361: exit status 80 (338.159153ms)

-- stdout --
	* Adding node m03 to cluster multinode-041361 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-041361-m03 already exists in multinode-041361-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-041361-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-041361-m03: (2.068528075s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.66s)

TestPreload (122.52s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-443829 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1213 11:27:27.930946  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-443829 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m0.955176385s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-443829 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-443829 image pull gcr.io/k8s-minikube/busybox: (2.119172149s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-443829
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-443829: (5.902762811s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-443829 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1213 11:28:49.714926  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:29:06.640395  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-443829 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.875068418s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-443829 image list
helpers_test.go:176: Cleaning up "test-preload-443829" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-443829
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-443829: (2.421450475s)
--- PASS: TestPreload (122.52s)
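Note: a sketch of the preload scenario above, which checks that an image pulled before a stop is still present after restarting with preload enabled (profile name is illustrative):

  out/minikube-linux-arm64 start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
  out/minikube-linux-arm64 stop -p preload-demo
  out/minikube-linux-arm64 start -p preload-demo --preload=true --wait=true
  out/minikube-linux-arm64 -p preload-demo image list      # the busybox image should still be listed
  out/minikube-linux-arm64 delete -p preload-demo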

TestScheduledStopUnix (103.75s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-376509 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-376509 --memory=3072 --driver=docker  --container-runtime=crio: (27.739175484s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-376509 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 11:29:44.413790  516896 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:29:44.413931  516896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:29:44.413942  516896 out.go:374] Setting ErrFile to fd 2...
	I1213 11:29:44.413948  516896 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:29:44.414248  516896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:29:44.414631  516896 out.go:368] Setting JSON to false
	I1213 11:29:44.414759  516896 mustload.go:66] Loading cluster: scheduled-stop-376509
	I1213 11:29:44.415168  516896 config.go:182] Loaded profile config "scheduled-stop-376509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:29:44.415253  516896 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/config.json ...
	I1213 11:29:44.415460  516896 mustload.go:66] Loading cluster: scheduled-stop-376509
	I1213 11:29:44.415640  516896 config.go:182] Loaded profile config "scheduled-stop-376509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-376509 -n scheduled-stop-376509
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-376509 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 11:29:44.886150  516984 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:29:44.886326  516984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:29:44.886356  516984 out.go:374] Setting ErrFile to fd 2...
	I1213 11:29:44.886377  516984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:29:44.886621  516984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:29:44.886924  516984 out.go:368] Setting JSON to false
	I1213 11:29:44.887153  516984 daemonize_unix.go:73] killing process 516912 as it is an old scheduled stop
	I1213 11:29:44.890597  516984 mustload.go:66] Loading cluster: scheduled-stop-376509
	I1213 11:29:44.891065  516984 config.go:182] Loaded profile config "scheduled-stop-376509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:29:44.891177  516984 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/config.json ...
	I1213 11:29:44.891387  516984 mustload.go:66] Loading cluster: scheduled-stop-376509
	I1213 11:29:44.891556  516984 config.go:182] Loaded profile config "scheduled-stop-376509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 11:29:44.896994  356328 retry.go:31] will retry after 116.931µs: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.899509  356328 retry.go:31] will retry after 188.917µs: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.900678  356328 retry.go:31] will retry after 208.696µs: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.901802  356328 retry.go:31] will retry after 495.275µs: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.902927  356328 retry.go:31] will retry after 334.165µs: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.904046  356328 retry.go:31] will retry after 1.132271ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.906244  356328 retry.go:31] will retry after 1.416222ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.908447  356328 retry.go:31] will retry after 2.058347ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.910576  356328 retry.go:31] will retry after 3.064883ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.914766  356328 retry.go:31] will retry after 2.207144ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.917976  356328 retry.go:31] will retry after 4.565012ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.923192  356328 retry.go:31] will retry after 9.871435ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.933635  356328 retry.go:31] will retry after 19.11793ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.954494  356328 retry.go:31] will retry after 10.414746ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.965756  356328 retry.go:31] will retry after 32.336563ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
I1213 11:29:44.999013  356328 retry.go:31] will retry after 64.422342ms: open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-376509 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-376509 -n scheduled-stop-376509
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-376509
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-376509 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1213 11:30:10.908640  517460 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:30:10.908851  517460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:30:10.908879  517460 out.go:374] Setting ErrFile to fd 2...
	I1213 11:30:10.908899  517460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:30:10.909294  517460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:30:10.909706  517460 out.go:368] Setting JSON to false
	I1213 11:30:10.909884  517460 mustload.go:66] Loading cluster: scheduled-stop-376509
	I1213 11:30:10.910560  517460 config.go:182] Loaded profile config "scheduled-stop-376509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 11:30:10.910706  517460 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/scheduled-stop-376509/config.json ...
	I1213 11:30:10.910940  517460 mustload.go:66] Loading cluster: scheduled-stop-376509
	I1213 11:30:10.911133  517460 config.go:182] Loaded profile config "scheduled-stop-376509": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-376509
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-376509: exit status 7 (72.748828ms)

                                                
                                                
-- stdout --
	scheduled-stop-376509
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-376509 -n scheduled-stop-376509
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-376509 -n scheduled-stop-376509: exit status 7 (76.415514ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-376509" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-376509
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-376509: (4.29224775s)
--- PASS: TestScheduledStopUnix (103.75s)
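The "will retry after ..." lines at the top of this test come from polling the scheduled-stop pid file with steadily growing, jittered delays. A minimal Go sketch in the same spirit follows; the constants, path, and function name are illustrative assumptions, not minikube's actual retry.go.

// Hedged sketch only: poll for a pid file with growing, jittered delays,
// roughly matching the "will retry after ..." lines in the log above.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func waitForPidFile(path string, attempts int) bool {
	delay := 2 * time.Millisecond // assumed starting delay
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return true // pid file is present
		}
		// add jitter and roughly double the base delay on each retry
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return false
}

func main() {
	// hypothetical path, for illustration only
	ok := waitForPidFile("/tmp/example-profile/pid", 7)
	fmt.Println("pid file found:", ok)
}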

                                                
                                    
TestInsufficientStorage (12.84s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-625260 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-625260 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.238204068s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8a2a169c-6b25-47ab-a7e8-a8221b3bba7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-625260] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b89636f-951a-4366-b2cb-016d13f60edd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22127"}}
	{"specversion":"1.0","id":"bc65d92c-4a15-4545-83a8-23c2eb98e059","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ebeead02-addc-474d-9d69-408553f3309f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig"}}
	{"specversion":"1.0","id":"ede06112-7fbd-4f4d-8be4-f621817ac30e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube"}}
	{"specversion":"1.0","id":"a9989a1b-7ab8-41c1-8c07-10d1b429d856","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f8097738-b383-43a4-9cc4-57230049e78a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bbc326b1-7993-49e6-aeef-308c97f9f36a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3ccd2c37-0989-4851-90ab-521268f2da7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3b2b393c-30af-4c36-aac4-8ecd954ee772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5316c4b4-c814-47ed-8db3-c7404fcf637c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"63e63bf5-879d-4a3b-a196-3e2c37f1bb06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-625260\" primary control-plane node in \"insufficient-storage-625260\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2994b1f8-27db-46b2-897c-c638b1c732e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"da5d9ee8-8873-4ff3-b5b6-96f79e4f51a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a7646d3-8b87-4e70-8ded-b62877431ed1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-625260 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-625260 --output=json --layout=cluster: exit status 7 (300.07167ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-625260","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-625260","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:31:10.885023  519339 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-625260" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-625260 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-625260 --output=json --layout=cluster: exit status 7 (310.513093ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-625260","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-625260","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 11:31:11.193467  519410 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-625260" does not appear in /home/jenkins/minikube-integration/22127-354468/kubeconfig
	E1213 11:31:11.204214  519410 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/insufficient-storage-625260/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-625260" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-625260
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-625260: (1.990042511s)
--- PASS: TestInsufficientStorage (12.84s)
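TestInsufficientStorage drives minikube with --output=json, so the stdout above is one CloudEvents-style JSON object per line. A minimal Go sketch for picking out the error event (here RSRC_DOCKER_STORAGE, exit code 26) follows; the field names are taken from the output above, while the program itself is only an illustrative assumption, not part of the test suite.

// Hedged sketch only: scan minikube "--output=json" lines from stdin and
// report any "io.k8s.sigs.minikube.error" event, as seen in the run above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore lines that are not JSON events
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. RSRC_DOCKER_STORAGE with exitcode 26 in the run above
			fmt.Printf("error event %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}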

                                                
                                    
TestRunningBinaryUpgrade (301.43s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1221296494 start -p running-upgrade-686784 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1213 11:38:51.004830  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1221296494 start -p running-upgrade-686784 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.606316053s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-686784 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 11:39:06.640034  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:40:03.534856  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:42:00.470935  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:42:27.931490  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-686784 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.489817284s)
helpers_test.go:176: Cleaning up "running-upgrade-686784" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-686784
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-686784: (2.01141068s)
--- PASS: TestRunningBinaryUpgrade (301.43s)

                                                
                                    
TestMissingContainerUpgrade (110.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.314221586 start -p missing-upgrade-438132 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.314221586 start -p missing-upgrade-438132 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.660562874s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-438132
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-438132
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-438132 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-438132 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.918407626s)
helpers_test.go:176: Cleaning up "missing-upgrade-438132" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-438132
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-438132: (2.403454408s)
--- PASS: TestMissingContainerUpgrade (110.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-627673 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-627673 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (92.409491ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-627673] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-627673 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-627673 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.630024247s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-627673 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (31.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1213 11:32:00.474482  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.092486913s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-627673 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-627673 status -o json: exit status 2 (315.050069ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-627673","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-627673
E1213 11:32:27.931069  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-627673: (2.000592763s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.41s)

                                                
                                    
TestNoKubernetes/serial/Start (11.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-627673 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (11.448330846s)
--- PASS: TestNoKubernetes/serial/Start (11.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22127-354468/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-627673 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-627673 "sudo systemctl is-active --quiet service kubelet": exit status 1 (510.677486ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.66s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-627673
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-627673: (1.383549444s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-627673 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-627673 --driver=docker  --container-runtime=crio: (7.694176915s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-627673 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-627673 "sudo systemctl is-active --quiet service kubelet": exit status 1 (401.491781ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (307.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.99940439 start -p stopped-upgrade-558323 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.99940439 start -p stopped-upgrade-558323 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.496027113s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.99940439 -p stopped-upgrade-558323 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.99940439 -p stopped-upgrade-558323 stop: (1.273708234s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-558323 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 11:34:06.639720  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:37:00.470957  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 11:37:27.930756  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-558323 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.071454793s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (307.84s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-558323
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-558323: (1.940779707s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.94s)

                                                
                                    
TestPause/serial/Start (57.3s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-649359 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1213 11:44:06.644039  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-649359 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (57.295669125s)
--- PASS: TestPause/serial/Start (57.30s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-649359 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-649359 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.639751205s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.65s)

                                                
                                    
TestNetworkPlugins/group/false (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-062409 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-062409 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (201.190811ms)

                                                
                                                
-- stdout --
	* [false-062409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22127
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 11:45:35.568844  573862 out.go:360] Setting OutFile to fd 1 ...
	I1213 11:45:35.569072  573862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:45:35.569103  573862 out.go:374] Setting ErrFile to fd 2...
	I1213 11:45:35.569123  573862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 11:45:35.569388  573862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-354468/.minikube/bin
	I1213 11:45:35.569822  573862 out.go:368] Setting JSON to false
	I1213 11:45:35.570709  573862 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12488,"bootTime":1765613848,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1213 11:45:35.570800  573862 start.go:143] virtualization:  
	I1213 11:45:35.574383  573862 out.go:179] * [false-062409] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1213 11:45:35.578334  573862 out.go:179]   - MINIKUBE_LOCATION=22127
	I1213 11:45:35.578493  573862 notify.go:221] Checking for updates...
	I1213 11:45:35.584423  573862 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 11:45:35.587377  573862 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22127-354468/kubeconfig
	I1213 11:45:35.590279  573862 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-354468/.minikube
	I1213 11:45:35.593111  573862 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 11:45:35.596143  573862 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 11:45:35.599742  573862 config.go:182] Loaded profile config "kubernetes-upgrade-854588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 11:45:35.599878  573862 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 11:45:35.643686  573862 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1213 11:45:35.643841  573862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 11:45:35.697919  573862 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-13 11:45:35.688643594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1213 11:45:35.698027  573862 docker.go:319] overlay module found
	I1213 11:45:35.703003  573862 out.go:179] * Using the docker driver based on user configuration
	I1213 11:45:35.705950  573862 start.go:309] selected driver: docker
	I1213 11:45:35.705977  573862 start.go:927] validating driver "docker" against <nil>
	I1213 11:45:35.705993  573862 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 11:45:35.709656  573862 out.go:203] 
	W1213 11:45:35.712584  573862 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 11:45:35.715408  573862 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-062409 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-062409" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 11:33:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-854588
contexts:
- context:
    cluster: kubernetes-upgrade-854588
    user: kubernetes-upgrade-854588
  name: kubernetes-upgrade-854588
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-854588
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.crt
    client-key: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-062409

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-062409"

                                                
                                                
----------------------- debugLogs end: false-062409 [took: 3.257927912s] --------------------------------
helpers_test.go:176: Cleaning up "false-062409" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-062409
--- PASS: TestNetworkPlugins/group/false (3.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (62.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1213 11:47:27.930732  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.39039445s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-051699 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c750a8f1-85bf-45da-b73b-d38717856602] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c750a8f1-85bf-45da-b73b-d38717856602] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003621725s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-051699 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-051699 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-051699 --alsologtostderr -v=3: (12.009067919s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699: exit status 7 (72.381011ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-051699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (54.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1213 11:49:06.639996  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-371413/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-051699 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.445685204s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-051699 -n old-k8s-version-051699
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jpkrw" [76679ef8-1925-4cdb-9473-1acc7e6609c7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003366391s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jpkrw" [76679ef8-1925-4cdb-9473-1acc7e6609c7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003875442s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-051699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-051699 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (58.944354401s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (56.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (56.423009912s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-151605 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0e627c31-a482-4a58-a8f5-410ea307b7ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0e627c31-a482-4a58-a8f5-410ea307b7ed] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003676825s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-151605 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-151605 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-151605 --alsologtostderr -v=3: (12.070729441s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-326948 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [27e51c4b-ab88-4f0c-a4c9-d056eb521aca] Pending
helpers_test.go:353: "busybox" [27e51c4b-ab88-4f0c-a4c9-d056eb521aca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [27e51c4b-ab88-4f0c-a4c9-d056eb521aca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.006875555s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-326948 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605: exit status 7 (64.949847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-151605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-151605 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (52.194108783s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-151605 -n default-k8s-diff-port-151605
E1213 11:52:00.470824  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/functional-407525/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-326948 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-326948 --alsologtostderr -v=3: (12.93941325s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948: exit status 7 (129.993368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-326948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (48.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-326948 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (48.054613205s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326948 -n embed-certs-326948
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-2j5n9" [f4636eef-77b2-455c-a3f1-d90d2318c5ec] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002811155s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-2j5n9" [f4636eef-77b2-455c-a3f1-d90d2318c5ec] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003536437s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-151605 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-151605 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-s4wkb" [97973cdf-f52e-4441-a054-20360ea34720] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004977916s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-s4wkb" [97973cdf-f52e-4441-a054-20360ea34720] Running
E1213 11:52:27.930680  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/addons-543946/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004844785s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-326948 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-326948 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-800979 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-800979 --alsologtostderr -v=3: (1.289933257s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-800979 -n newest-cni-800979: exit status 7 (82.559428ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-800979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-307409 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-307409 --alsologtostderr -v=3: (1.313170049s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-307409 -n no-preload-307409: exit status 7 (68.760915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-307409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-800979 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.173668321s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.17s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-062409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-062409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-hmj9t" [cd7eb763-e219-4a69-8713-cd13fd6b6f37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-hmj9t" [cd7eb763-e219-4a69-8713-cd13fd6b6f37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00354516s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-062409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.621468403s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.62s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-thlp6" [802f4e25-c822-41dd-84c1-9af764d5f3d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003362189s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-062409 "pgrep -a kubelet"
I1213 12:11:43.708618  356328 config.go:182] Loaded profile config "kindnet-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-062409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-sv6wr" [4295ea02-608e-4637-b005-1d08a5f20a8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-sv6wr" [4295ea02-608e-4637-b005-1d08a5f20a8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003601991s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-062409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.561982147s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.56s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-9b5cr" [8ffd6d30-ff77-4fe2-8887-eb0063c50b5d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003142106s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-062409 "pgrep -a kubelet"
I1213 12:13:28.053143  356328 config.go:182] Loaded profile config "calico-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-062409 replace --force -f testdata/netcat-deployment.yaml
I1213 12:13:28.325103  356328 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-w2m26" [cc3559a1-f467-44b9-86c5-9c1a8288d7aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-w2m26" [cc3559a1-f467-44b9-86c5-9c1a8288d7aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004242635s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-062409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.921999947s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.92s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-062409 "pgrep -a kubelet"
I1213 12:14:55.013018  356328 config.go:182] Loaded profile config "custom-flannel-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-062409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lkp6l" [71454fe1-ddda-4fbf-8a87-c392f4746310] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-lkp6l" [71454fe1-ddda-4fbf-8a87-c392f4746310] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003356309s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-062409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (54.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (54.429064446s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-062409 "pgrep -a kubelet"
I1213 12:16:21.947407  356328 config.go:182] Loaded profile config "enable-default-cni-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-062409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xwwvt" [98bd22a5-b617-4068-96d1-4ca3db00935d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xwwvt" [98bd22a5-b617-4068-96d1-4ca3db00935d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003850469s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-062409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.747628131s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.75s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-kc75r" [83ffd130-c40d-4d2c-9bcb-b6c2ee4c7c29] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00414s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-062409 "pgrep -a kubelet"
I1213 12:17:54.947590  356328 config.go:182] Loaded profile config "flannel-062409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-062409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-klb6b" [185c77c2-2e5a-4438-9e58-92d86916a25b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-klb6b" [185c77c2-2e5a-4438-9e58-92d86916a25b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003745692s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-062409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-062409 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.120121656s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-062409 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-062409 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-445hw" [cec60feb-f803-4dc8-8c5e-0cee32746936] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-445hw" [cec60feb-f803-4dc8-8c5e-0cee32746936] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003930238s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-062409 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1213 12:19:56.593801  356328 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/custom-flannel-062409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
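The DNS, Localhost, and HairPin checks above each run a single command inside the netcat deployment. As a rough manual-reproduction sketch (the commands are copied from the log lines above and assume the bridge-062409 profile is still running; in the actual run they were issued by net_test.go), the same probes can be repeated by hand:

kubectl --context bridge-062409 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context bridge-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context bridge-062409 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The first probes in-cluster DNS resolution of kubernetes.default, the second probes TCP reachability of port 8080 on localhost inside the pod, and the third probes the same port through the netcat service name (the hairpin path).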

                                                
                                    

Test skip (39/412)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.43
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.17
387 TestNetworkPlugins/group/kubenet 3.49
395 TestNetworkPlugins/group/cilium 3.9
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-135245 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-135245" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-135245
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-072590" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-072590
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-062409 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-062409" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 11:33:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-854588
contexts:
- context:
    cluster: kubernetes-upgrade-854588
    user: kubernetes-upgrade-854588
  name: kubernetes-upgrade-854588
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-854588
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.crt
    client-key: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.key
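Most probes in this debugLogs dump fail with "context was not found for specified context: kubenet-062409" because the kubenet test is skipped before any cluster is started, so no kubeconfig context or minikube profile is ever created for it; the only entry in the kubeconfig above is the leftover kubernetes-upgrade-854588 cluster. As a hedged local-triage sketch (not something the harness executed), the available contexts and profiles could be listed with:

kubectl config get-contexts
out/minikube-linux-arm64 profile list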

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-062409

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-062409"

                                                
                                                
----------------------- debugLogs end: kubenet-062409 [took: 3.331737136s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-062409" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-062409
--- SKIP: TestNetworkPlugins/group/kubenet (3.49s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-062409 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-062409" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22127-354468/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 11:33:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-854588
contexts:
- context:
    cluster: kubernetes-upgrade-854588
    user: kubernetes-upgrade-854588
  name: kubernetes-upgrade-854588
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-854588
  user:
    client-certificate: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.crt
    client-key: /home/jenkins/minikube-integration/22127-354468/.minikube/profiles/kubernetes-upgrade-854588/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-062409

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-062409" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-062409"

                                                
                                                
----------------------- debugLogs end: cilium-062409 [took: 3.742449432s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-062409" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-062409
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)

                                                
                                    